I've been thinking about the Philosophical Gourmet Report recently. Leiter says he started it way back when to act as a guide for grad students, to make sure they didn't get suckered into attending a subpar program in the hopes of being well-positioned for a job. Since those early days, the PGR has grown substantially in content, participation, and influence. (Shit, it's published by Blackwell!) And Leiter is no longer the Grand Poobah; Berit Brogaard and Christopher Pynes are.
What does the PGR do? From the website, "This Report tries to capture existing professional sentiment about the quality and reputation of different Ph.D. programs as a whole and in specialty areas in the English-speaking world." That's all well and good, but what do we want with the captured professional sentiment? What do we do with info like, "Population P expresses a preference for Program A over Program B"?
Here are two possibilities: (1) we use the info to make guesses about the likelihood of the professional success of graduates from a program; (2) we use the info to make guesses about the quality of philosophy being done at a program. (1) suggests that the PGR is helpful for students deciding where to go to increase their odds of getting a job. (2) suggests that the PGR is helpful for students deciding where to go to do really good philosophy. We'll do (1) today and (2) another day.
(1) Measuring professional success
First things first: what the heck is meant by "professional success"? Here are a few candidates:
(a) getting job offers/positions, in particular tenure-track offers/positions
(b) publishing and being cited
... or some combination of these.
So one way to interpret the findings of the PGR is to say that a program's being more highly ranked increases the likelihood that graduates from that program will experience greater professional success. In less Byzantine prose: program ranking positively correlates with professional success.
The PGR makes this kind of a claim on its website: "These sentiments correlate fairly well with job placement of junior faculty in continuing positions in universities and colleges..."
Here's my criticism: if what we're really interested in are correlations between programs and job placements, then surveying philosophers is an indirect and shitty measure. Why? We're asking philosophers to rank a program from 0 (inadequate) to 5 (distinguished) based on the department's current faculty. What philosophers think about a group of philosophers employed by the same department gives an impression of what philosophers think about that department, but not necessarily of whether that department does a good job of placing its students. In short, the method delivers philosophers' opinions about departments, not departments' placement records.
"Maybe we're not interested in placement. Maybe we want to rank departments for other reasons!" Cool cool cool cool cool. But given the living hell that is the philosophy job market, I don't think folks should be concerned about ranking departments if predictions about grad placement aren't part of why we're ranking in the first place. If we're not interested in ranking for the purpose of helping people get jobs, then what the hell are we doing?
Further, asking for opinions about a department is subject to a wide variety of cognitive biases, like the availability heuristic: a department whose faculty are highly visible will come to mind easily and get rated highly, whether or not the rater knows anything about how that department treats or places its students.
"Ok smart guy, if this method is insufficient, what's a better one?" I'm glad you asked. My answer isn't all that deep: if you want to know how well a department does at placing its graduates, look at the department's placement rates. It's a direct measure of what we're interested in. So rather than ranking programs based on philosophers' opinions, rank programs based on their placement rates.
"Ok smart guy, but it says on the PGR's website that grad students should use placement data when making their decision where to go to grad school." That's cool, but then what value does the PGR add? What can the PGR predict that placement rates can't (or can't predict as well)?
One assumption of the methodology of the PGR is that philosophers have some accurate sense of program placement rates, which seems implausible. Do any of these folks know what Harvard's or Brown's or CUNY's or Villanova's placement rates were last year?
Another assumption the method needs is that the quality of the philosophers a program has is an indicator of how well it places students: higher-quality philosophers mean higher placement rates. But this claim isn't obvious at all, and it's not obvious how we would measure things to come up with evidence for it. We can get numbers for placement rates, sure, but what's our metric for quality philosophy? Surveys, probably. In that case we're left with looking for correlations between subjective measures of philosophical quality and placement rates. AT BEST this might give us evidence for whether impressions of philosophical quality correlate with placement rates. But then we're really looking at correlations between philosophers' impressions and objective phenomena, not between program quality and anything.
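To make the "at best" point concrete, here's a minimal sketch of the kind of check that would be needed: a rank correlation between survey-based quality scores and placement rates. The programs and numbers below are entirely made up for illustration, not real PGR data, and a real test would need many programs and careful tie handling.

```python
# Hypothetical data: mean survey score (0-5 scale) and tenure-track
# placement rate for five made-up programs. NOT real PGR numbers.
survey_score = {"A": 4.6, "B": 3.9, "C": 3.1, "D": 2.5, "E": 1.8}
placement_rate = {"A": 0.55, "B": 0.60, "C": 0.40, "D": 0.45, "E": 0.20}

def ranks(values):
    """Rank values from 1 (largest) upward. No tie handling, for simplicity."""
    order = sorted(values, reverse=True)
    return [order.index(v) + 1 for v in values]

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation computed on ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

programs = sorted(survey_score)
rho = spearman([survey_score[p] for p in programs],
               [placement_rate[p] for p in programs])
print(f"Spearman rho between survey scores and placement rates: {rho:.2f}")
```

Even a high rho here would only show that impressions track placement; it would say nothing about whether either tracks "quality."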
That's all for now. Later I'll consider whether the PGR is good for getting a measure of the quality of philosophy being done at a program.
But first, some other general gripes:
1. We get mean values, but why not median and mode?
2. What about standard deviation or other measures of variance?
3. Why aren't the raw data available?
These are easily remedied gripes and I hope that the editors fix them soon!
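The first two gripes really are trivial to remedy. A few lines of Python (with made-up ratings, not actual PGR data) show why the fuller picture matters: two programs can share the same mean while evaluators disagree wildly about one of them.

```python
import statistics

# Hypothetical evaluator ratings (0-5 scale) for two made-up programs.
# Same mean, very different spread -- which is exactly the information
# that reporting the mean alone hides. NOT real PGR data.
ratings_x = [3, 3, 3, 3, 3, 3]
ratings_y = [1, 5, 1, 5, 3, 3]

for name, r in [("Program X", ratings_x), ("Program Y", ratings_y)]:
    print(name,
          "mean:", statistics.mean(r),
          "median:", statistics.median(r),
          "stdev:", round(statistics.stdev(r), 2))
```

Both programs have mean 3.0, but Program X's standard deviation is 0 while Program Y's is about 1.79: evaluators agree about X and are split on Y. That's precisely what a mean-only table can't tell a prospective grad student.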