My big prejudice has to do with the folks making the big claims here. Mearsheimer and Walt have not been fans of my home institution, UCSD, for quite some time, engaging in a war of letters in the early 1990s just when I was on the job market for the first time. They were upset with how UCSD went about hiring in security studies. So, I hold onto a grudge pretty well and tend to get a bit fired up when I read some of their stuff. When they lament the death of the Old Boys network, as they did in the earlier draft of their hypothesis-testing piece, I get even more fired up:
"Instead of relying on 'old boy' networks, a professionalized field will use indicators of merit that appear to be impersonal and universal. In the academy, this tendency leads to the use of 'objective' criteria—such as citation counts—when making hiring and promotion decisions."

The context here is criticism of the professionalization of the IR field. Funny for folks who write about civil-military relations to think that professionalization is problematic. Anyhow, one could argue that their real problem is with how professionalization has been operationalized, that they are not lamenting the demise of the old boy network, but I cannot help but think otherwise given the context. They dropped the old boy reference in the final draft but kept the language about how problematic professionalization is:
"Over time, professions also tend to adopt simple and impersonal ways to evaluate members. In the academy, this tendency leads to heavy reliance on 'objective' criteria — such as citation counts — in hiring and promotion decisions. In some cases, department members and university administrators might think that they do not have to read a scholar's work and form an independent opinion of its quality. Instead, they can simply calculate the individual's 'h-index' (Hirsch, 2005) and make personnel decisions on that basis."

In the aftermath of the Duck crisis of August 2013 and of the discussions of networking and gender before and at APSA, I simply cannot ignore this critique of professionalism. Perhaps citation counts and numbers of articles in top journals are simplistic measures of success, and perhaps the quality of ideas, which cannot be quantified, matters more. But given the past (and present) of discrimination, I think imperfect objective criteria are better than the old ways, even if they still cut against women (since women are cited less). We can figure out what the gender biases in citations are via, dare I say it, hypothesis testing (again, see the IO paper or see Sara Mitchell's stuff), and then address and control for such biases. We cannot really do the same if we fall back on the old ways, which were unkind to women (not being boys and all).
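To be clear about what M&W are mocking: the h-index really is just arithmetic, which is part of their point. Here is a minimal sketch of the calculation in Python; the function name and the citation counts are made up by me for illustration, not anything from their paper or from Hirsch (2005) beyond the basic definition (the largest h such that the scholar has h papers with at least h citations each).

def h_index(citations):
    # Sort citation counts from most-cited paper to least-cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    # h is the largest rank at which the paper in that position
    # still has at least that many citations.
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical record: six papers with these citation counts.
print(h_index([25, 8, 5, 3, 3, 1]))  # prints 3

The whole thing takes seconds, which is exactly why M&W worry that administrators will lean on it instead of reading anyone's work.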
As for the h-index, I really hope that the tenure letters people write, which do get into the quality of the work, matter, or else many of us (including me right now) are wasting a lot of time. Indeed, one of the M&W recommendations is for promotion committees to focus on just a few (three or four) pieces when evaluating a candidate, rather than on the quantity of output. I don't know about them, but when I write an evaluation for tenure/promotion, I tend to focus on the quality of a few key works. I do address the larger record, but I only go into detail about a handful. I have limited time and crappy time management skills.
Anyhow, I am very biased in all of this, so take what I say with a huge grain of salt, as a student in my big Intro to IR class once illustrated.