Previously I argued that the PGR is not a good measure of the likelihood of job placement. I came across this post from Leiter Reports, which suggests that the PGR isn't just a predictor of placements simpliciter but of good placements, where "good" means something like "a high-quality, PhD-granting institution." How do we determine whether one good placement is better than another? By the hiring institution's ranking on the PGR.
I don't find this argument -- that the PGR is a good measure of good job placement and not just job placement -- terribly interesting. First, there are lots of reasons why someone wouldn't want to work at a high-PGR school. I like my Jesuit SLAC because we focus on teaching and I have a lot of support from the admin to try new ideas, both in teaching and research. And nobody is anal about the job: most of us pursue some kind of work-life balance, and we have the support of our colleagues in pursuing it. Second, HAVE YOU SEEN THE GODDAMN JOB MARKET LATELY??? You might lovingly call it a "shit-show," but that's kind of an insult to shit-shows. My sample size is small and biased, but the jaw of every non-philosopher academic I've talked to drops when I tell them that it's normal to get 300-500 applications for an open/open position. While nobody wants a toxic work environment (which, I'm guessing, occurs at all levels of professional philosophy), I think many folks consider themselves lucky to find a job in academia at all. So using the PGR as a way of predicting good job placements is out of step with the lived realities of finding an academic job. The market has driven us from wanting good jobs to wanting merely jobs. We've got loans to pay and mouths to feed.

That all brings me to today's subject: perhaps the PGR is good at predicting the quality of philosophy one hopes to do. On average, the suggestion goes, products of higher-ranking PGR programs do better philosophy than products of lower-ranking PGR programs. Now if you're thinking to yourself, "hey, not-so-smart guy, you went to Fordham, which isn't very highly ranked, so clearly you've got an axe to grind!" lemme stop you right there: I have very little love for my alma mater, for a wide variety of reasons. I have no desire whatsoever to budge Fordham's spot in the PGR.

Back to the matter at hand. First, the biggest issue: if we're going to say that PGR ranking predicts the quality of philosophy a program's graduates are likely to do, we need some metric of quality of philosophy. As far as I know, the only one offered is the same as for smut: you know it when you see it. So let's run with this for a second. We know good philosophy when we see it. Presumably this means something like, "in reading certain kinds of philosophy, we experience X," where 'X' is short for a set of positive thoughts and feelings. And I'm sure we've all had this experience. Reading William James got me into philosophy in the first place, and I've gotten that feeling reading (in no particular order) Aristotle, Wittgenstein, Andy Clark, Susan Stebbing, Alva Noe, Jenny Saul, Alvin Goldman, Mary Midgley, and Richard Menary, among many others. The PGR, then, predicts that graduates of more highly ranked programs are more likely to write material that enables you to experience X than graduates of lower-ranked programs are, provided your tastes are like those of the raters.

On this view, the PGR works kind of like a ranking of vineyards or (what I'm more familiar with) breweries. Beer-ranking experts might say that products of brewery A are superior to products of brewery B. The best way to think about the PGR, then, is as a taste guide for consumers of philosophy: experts agree that the philosophy of mind coming out of NYU's grads is superior to that coming out of Stanford's grads. Fortunately, there's a relatively easy way to see if the PGR makes good predictions on this score. (Relatively easy in principle, at least; we don't need anything like Twin Earth or Laplace's demon.)
And this is a study that the APA should definitely fund. Pick some number of grads from schools at every tier of the PGR. (I think you could do this with the general ranking, but it might work better with the specialties.) Commission them to write a short (~3,000-word) paper of their choosing, but prepare it as they would for peer review: absolutely no identifying information. (In exchange, perhaps these papers could come out in a special issue of the Journal of the APA, or the authors could somehow or other be compensated for their time.) Then give these papers to other philosophers and have them guess the tier of school the author comes from. We can instruct raters to pay attention to the "you know it when you see it" criterion for quality philosophy, since that's what we agreed to at the start of this investigation. If it turns out that raters are pretty accurate, then we might regard the PGR as a reliable guide to the quality of philosophy that programs' grads produce.

Now, we began this by suggesting that "good philosophy" produces a kind of feeling. But what if that's the wrong metric? Maybe we rank "good philosophy" by some weighting of publications, citations, awards, grants, whatever. On this suggestion, higher-ranked PGR programs produce grads who publish more papers at higher-ranked venues that are more often cited (and so on), and who win more grants, than the grads of lower-ranked PGR programs do. I think I have some beef with that definition of "good philosophy," but let's run with it for now. That's a (relatively) simple task that can be done by gathering publicly available data. (At least, I think most of it is publicly available.) So then we just need someone with the patience, financial support, and data-analysis chops to gather publication info, citation rates, and grant-winner info (probably from the NEH, NSF, and Templeton, among others). These can be given weights, or a range of weights, and then we can see if the PGR makes good predictions. (I sketch what both of these analyses might look like below.)

Either way, the key is to remember three things: (1) the PGR is a survey expressing preferences; (2) if the survey data is to be useful, it has to make predictions about grads; and (3) these predictions are testable.
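Since I can't help myself, here's roughly what scoring the blind-rating study might look like once the guesses are in. Everything below is made up -- the tier labels, the toy data, the number of papers -- the point is just that "raters are pretty accurate" should cash out as "accuracy comfortably above what random guessing over the tiers would give you."

```python
# A minimal sketch of scoring the blind-rating study, assuming hypothetical
# tier labels and made-up rater data. No external dependencies.
import random

# Hypothetical tiers a rater can guess; the real study would use PGR tiers.
TIERS = ["top", "upper-middle", "lower-middle", "unranked"]

def accuracy(true_tiers, guessed_tiers):
    """Fraction of papers whose author's tier the rater guessed correctly."""
    hits = sum(t == g for t, g in zip(true_tiers, guessed_tiers))
    return hits / len(true_tiers)

def chance_baseline(true_tiers, n_sims=10_000, seed=0):
    """Average accuracy of a rater who guesses tiers uniformly at random."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        guesses = [rng.choice(TIERS) for _ in true_tiers]
        total += accuracy(true_tiers, guesses)
    return total / n_sims

# Toy data: eight anonymized papers and one rater's guesses (all invented).
true_tiers = ["top", "top", "upper-middle", "lower-middle",
              "lower-middle", "unranked", "unranked", "upper-middle"]
rater_guesses = ["top", "upper-middle", "upper-middle", "top",
                 "lower-middle", "unranked", "lower-middle", "upper-middle"]

print("rater accuracy:", accuracy(true_tiers, rater_guesses))      # 0.625 here
print("chance baseline:", round(chance_baseline(true_tiers), 3))   # ~0.25 with four tiers
```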
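And here's an equally rough sketch of the bibliometric version: collapse publications, citations, and grants into a single weighted score per program and check whether that score tracks PGR rank. The program names, weights, and numbers are all placeholders, and a real analysis would sweep a range of weightings over actual data; this just shows the shape of the test.

```python
# A rough sketch of the weighted-metrics test. Program names, weights, and
# per-grad numbers are invented placeholders, not real data.
from statistics import mean

# name: (PGR rank, avg publications per grad, avg citations, avg grants)
programs = {
    "Program A": (1, 6.0, 120.0, 1.2),
    "Program B": (8, 4.5, 80.0, 0.9),
    "Program C": (20, 4.8, 95.0, 0.7),
    "Program D": (45, 2.1, 30.0, 0.3),
}

# One possible weighting of the three metrics (entirely arbitrary).
WEIGHTS = {"pubs": 1.0, "cites": 0.05, "grants": 2.0}

def quality_score(pubs, cites, grants):
    """Collapse the three metrics into a single 'good philosophy' score."""
    return WEIGHTS["pubs"] * pubs + WEIGHTS["cites"] * cites + WEIGHTS["grants"] * grants

def spearman(xs, ys):
    """Spearman rank correlation (no tie handling; fine for a sketch)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        out = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            out[i] = float(rank)
        return out
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

pgr_ranks = [v[0] for v in programs.values()]
scores = [quality_score(*v[1:]) for v in programs.values()]

# A strongly negative rho (better rank = lower number, higher score) would
# count as the PGR predicting well on this metric; a rho near zero would not.
print("Spearman rho:", round(spearman(pgr_ranks, scores), 3))  # -0.8 on this toy data
```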