CHARLES LASSITER, PHILOSOPHY, GONZAGA UNIVERSITY
  • Home
  • About
  • Research
  • Teaching
  • Blog
  • Blog Data

Philosophy by number:
reporting on the job market in academic philosophy


2018-2019 job cycle: how many and where

3/4/2020

1 Comment

 
I recently downloaded some data from PhilJobs on job openings in the past few years. I wanted to share some findings from that dataset: some wheres and whens.

The data have a number of variables of interest: hiring school, kind of job (academic or not), date the ad was posted, date apps are due, level of job (postdoc, junior, etc.), and contract type (TT, fixed-term, etc.), among others. With these data, we can ask some interesting questions. Here's what we're looking at today.

1. how many jobs were there?
2. where were the jobs? like literally, where in the world were the jobs?

I filtered the data for three kinds of job: junior faculty, postdoc and open rank hires. (FWIW filtering for "junior" doesn't discriminate between TT and fixed-term. We'll have a chance to separate out TT and fixed-term later.) We might look at admin jobs or senior hires, but I'm not terribly concerned about those groups, tbh. When reflecting on how terrible the job market is for philosophers, the usual victim is a newly-minted (or newishly-minted) PhD looking for steady employment.

And as far as the boundaries of the hiring cycle, I used August 1 to July 31 to define 'hiring cycle'. As we'll see, jobs are posted year-round, but these bounds feel natural for identifying the beginning and the end of a cycle.
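That August-to-July convention is easy to operationalize. Here's a minimal R sketch of mapping a posting date to a cycle label (the function name is mine, not from the original analysis):

```r
# Map a posting date to its hiring cycle (Aug 1 - Jul 31), e.g. "2018-2019"
hiring_cycle <- function(posted) {
  yr <- as.integer(format(posted, "%Y"))
  # Posts from Aug 1 onward belong to the cycle starting that year
  start <- ifelse(format(posted, "%m-%d") >= "08-01", yr, yr - 1)
  paste0(start, "-", start + 1)
}

hiring_cycle(as.Date("2018-10-15"))  # "2018-2019"
hiring_cycle(as.Date("2019-03-01"))  # "2018-2019"
```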

But before we completely leave the non-junior jobs behind, let's look quickly at the number of jobs posted last year in each category.


[Plot: number of jobs posted in each category]
So during the 2018-2019 hiring cycle, there were roughly 180 junior jobs, 70-ish postdocs, and 60-ish open-rank positions. Let's zoom in on the junior faculty jobs.
[Plot: junior jobs by contract type]
So 110 or so jobs are TT and 65 or so are fixed-term. 10-ish are junior but "tenured, continuing, or permanent." I imagine this is a catch-all category for "junior faculty, long-term."

Now, let's look at where the jobs are. I used the ggmap package. Word to the wise: this package is awesome, but make sure you (i) get your Google API key and (ii) enable the Static Maps API. (Details are in the documentation for ggmap.) It took me forever to figure out that I hadn't told Google that I wanted to use static maps.
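For anyone following along, the setup dance looks roughly like this -- a sketch, not the exact code behind the maps below, with a placeholder key and an assumed `jobs` data frame of geocoded lon/lat coordinates:

```r
library(ggmap)

# One-time setup: register the key from the Google Cloud console,
# with the Static Maps API enabled for that project
register_google(key = "YOUR-API-KEY")  # placeholder key

# Grab a base map and drop the geocoded job locations on it.
# These two calls hit Google's servers, so they need a real key to run.
us_map <- get_googlemap(center = c(lon = -96, lat = 38), zoom = 4)
ggmap(us_map) +
  geom_point(data = jobs, aes(x = lon, y = lat), alpha = 0.5)
```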

Anywho, we'll start with postdocs, junior faculty, and open rank and then focus on junior positions.

[Map: postdoc, junior, and open-rank jobs]
So here's something interesting: most jobs are on the east coast -- particularly the northeast corridor -- or California. No jobs in the Dakotas, Utah, Wyoming, or New Mexico.

Now let's look at junior faculty jobs, and we'll color the dots by contract type.
[Map: junior jobs, colored by contract type]
There's no clear pattern to TT vs fixed-term jobs in terms of location. CA had a bunch of TT jobs. So did the northeast.

What's the takeaway? There's a big divide in the density of job openings east and west of the Mississippi River. Once you move west of those states whose eastern borders are the Mississippi, jobs are a lot scarcer. Beyond that, there are higher concentrations of jobs in (1) California and (2) the northeast (NJ, NY, CT, MA...and also some bordering states like PA, VA, and MD). If you get your PhD on the east coast or CA, you have a decent chance of staying there. 

So I says to myself I says, "I wonder how this compares with median household income per county..." Here's the map:

[Map: median household income by county]
Hmmm... nothing obvious. The northeast and Midatlantic regions have a higher median household income than lots of other places in the country. (Holy shit -- Northern Virginia has a median income of $125k! That's higher than the Bay Area! And I'm astonished because I grew up the son of a handyman in NoVa.) More detailed investigations might yield something, but for now it doesn't look like much.

"Leiter Year in Review" Review

1/27/2020

0 Comments

 
"So Charlie, why are you doing this given your stack of grading and journal deadlines?" you might be wondering. Well, get off my back! You're not my real mom!

I have an abiding interest in the culture of professional philosophy. A lot of that culture is moving online, which is handy for folks like me who know enough about coding to get themselves into trouble and just enough to get themselves out of it (most of the time). BL boasts on his blog that it's the "world's most popular philosophy blog since 2003". (Maybe that's true? IDK if he's got the stats for Daily Nous. But of course "since 2003" might describe the blog's birth year and not how long it's been most popular.) It's influential in our profession; I have no doubt that many people read it. I also know that at least some people have VERY STRONG OPINIONS about LR and BL himself.

Given how popular it is, it's worth getting the big picture of the blog. If we wanted to describe the last year of blogging in a nutshell, we might say that LR is a lot like Slate or National Review: lots of good and important news interspersed with a lot of editorializing. How do we come to this conclusion? There's a lot of news shared about the profession and academia in general; there's also quite a bit of news about the stifling of academic freedom and free speech, plus more info on the PGR and job-search advice. But along with all that are opinions expressed about Weinberg, Manne, Jenkins, Ichikawa, and the Twitter Red Guard. (I will note that there are no tags for Christa Peterson or Nathan Oseroff-Spicer, both of whom get mentions in the content of posts. BL tags tenured faculty at institutions with PhD programs, but not grad students. I, for one, can appreciate that. Maybe I'm reading too much into it, but it seems a bit more like punching sideways than down, at least with respect to tags on LR.)

Leiter's Year in Review doesn't reflect his overall blogging patterns for the year (not that it has to). If I had to speculate, I'd say the posts picked are the most sensational ones, not the ones most representative of his blogging practices. You might get a more representative view of the blog by following the "Phil in the News" tag and then skimming some entries at random.

Finally, maybe you're interested in our three areas of investigation for the whole year? Here ya go.
[Plots: posting frequency, tag frequencies, and tag co-occurrence for the whole year]

Leiter Reports year in review, 4th quarter

1/24/2020

0 Comments

 
Go here for methods.
[Plot: 4th-quarter posting volume]
Lots of blogging happening here -- only 5 days off! On average, 2.9 posts per day. Looking at the shape of the plot, it seems a bit like a very sharp, compressed sine wave: days with one or fewer posts are usually followed by an upswing in posting, followed by a sharp decline. Not perfectly, of course, but close enough to suggest that high-posting days are followed by days of little posting until bottoming out at 0 or 1 posts, with the cycle beginning again.
[Plots: tag frequencies, 4th quarter overall vs. BL's picks]
This broadly follows the pattern from previous quarters. "Fascism Alerts" is (unusually) under-represented in BL's picks. Named philosophers are slightly over-represented in BL's picks. And the PGR is definitely over-represented in BL's picks.

Quick detour

1/24/2020

0 Comments

 
I mentioned before that "Phil in the News" is a ubiquitous tag on LR. Given that tag, what is it most likely to be paired with?
[Plot: tags co-occurring with "Phil in the News"]

There's a good deal of pairing with tags about the profession, the academy, and news updates. In fact, those are the top 5 co-occurring tags. After that, we get into the conflict with the "New Infantilists" and more Leiter-centric interests. So given a "Phil in the News" tag, you'll find co-occurring tags dealing with issues in the profession a good deal of the time (around half, I think), and a lot of Leiter-centric tags beyond that. (Though keep in mind that "Issues in the profession" and "Of Cultural Interest" also co-occur with "New Infantilists" etc. A deeper dive would look at posts with three and four tags...maybe that'll come later.)

Leiter Reports year in review, 3rd quarter

1/24/2020

0 Comments

 
In case you're just joining us, the methods are here. 
[Plot: 3rd-quarter posting volume]
Again, lots of blogging and the average for the quarter is 2.13 posts per day. Always good to take a step back during the summer months.
[Plots: tag frequencies, 3rd quarter overall vs. BL's picks]
A few interesting things to note: individuals are again over-represented in BL's picks. I found the absence of "wankers" in BL's picks rather interesting. Granted, it doesn't take up a huge proportion of the total number of tags, but even so, there are more instances of "wankers" than of "Justin Weinberg" or "Jonathan Ichikawa". And "Kate Manne" is over-represented in BL's picks; so are "Humor", "Know-nothings", and "Twitter Red Guard".
[Plots: tag co-occurrence, 3rd quarter overall vs. BL's picks]
The ones that stand out here are "Twitter Red Guard", "Justin Weinberg", "Jonathan Ichikawa", and "Kate Manne". They definitely glow a lot brighter in BL's picks than in the whole quarter. In fact, "Twitter Red Guard" and "Jonathan Ichikawa" always co-occur in BL's picks; but while "JI" always occurs with "TRG" in the 3rd quarter, there's more to "TRG" than just "JI".

Conclusions? Again, BL's picks for highlights of the 3rd quarter over-represent beefs he's got with individuals. 


Leiter Reports year in review, 2nd quarter

1/22/2020

0 Comments

 
In the last post, I gave details on methods, so I'll skip that for the next posts and get right to the data.  First, blogging volume.
[Plot: 2nd-quarter posting volume]
There's no regression line to show the downward trend, but it's pretty visible. BL posted an average of 2.4 times per day and we can see he took more time off in the summer. Good for him to get some rest!

Now let's compare tag frequency for the whole 2nd quarter versus BL's picks.

[Plots: tag frequencies, 2nd quarter overall vs. BL's picks]
"Phil in the News" is top for both, and BL's picks reflect how frequently the tag was used in the 2nd quarter. "Fascism Alerts" is under-represented in BL's picks and "New Infantilism" over-represented. And again, looking at blogging in the 2nd quarter more broadly, BL is concerned with the profession and its manifestations in the wider world.

What's neat to see is that the "Phil in the News" tag isn't used as widely as it was in the 1st quarter (see below). Rather, the tag is used in a more focused way. It has a lot of co-occurrences with particular names -- Chris Bertram, Justin Weinberg, Jonathan Ichikawa, though not William Vallicella -- relative to the total number of instances of those names.  
[Plots: tag co-occurrence, 2nd quarter overall vs. BL's picks]
Something interesting to observe: in the 2nd quarter, "Fascism Alerts" co-occurred quite frequently with "Cultural Interest" and "Phil in the News", but those pairings don't show up in BL's picks. Instead, in the picks, "Fascism Alerts" always (or nearly enough) occurred with "Academic Freedom", "Phil in the News", "The Academy", and "New Infantilism." In fact, looking at the row for "New Infantilism", "Academic Freedom" always (or nearly enough) occurs with it. So BL's picks for the 2nd quarter over-represent the co-occurrence of "Academic Freedom" and "New Infantilism". If you were reading the Year in Review for the 2nd quarter, you'd get a skewed view of what was on LR.

Leiter Reports year in review, 1st quarter

1/19/2020

0 Comments

 
Brian Leiter ("BL") posted his 2019 "Year in Review" on Leiter Reports ("LR"). I looked for criteria about how he picks them and didn't see anything. My guess is that he picks posts that he found compelling throughout the year.  So I says to myself I says, "I wonder how that compares with his blogging for the entire year." I'd been working on a project scraping all 300 or so pages of LR and thought I'd start out smaller: scraping his posts for 2019. (The entire LR scraping project will come along sooner or later. I'm off sabbatical now and have substantially less time.)

I used the rvest package created by the great Hadley Wickham. If you're already good with the tidyverse, then web-scraping with rvest has a pretty easy learning curve. Just make sure you read the tutorial on using the SelectorGadget! And thank goodness WordPress is so easy to scrape: pages are individuated by the base URL (http://... etc.) and then a sequence of numbers.
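The scraping pattern is roughly this; the URL and CSS selectors below are placeholders standing in for the real ones (which SelectorGadget will find for you):

```r
library(rvest)
library(purrr)
library(tibble)

base_url <- "https://example-blog.example.com/page/"  # placeholder

# Scrape one archive page for post dates and tags.
# The CSS selectors are placeholders -- find the real ones with SelectorGadget.
scrape_page <- function(i) {
  page <- read_html(paste0(base_url, i))
  tibble(
    date = html_text2(html_elements(page, ".post-date")),
    tags = html_text2(html_elements(page, ".post-tags"))
  )
}

# Pages are just the base URL plus a number, so iterating is trivial:
# posts <- map_dfr(1:25, scrape_page)
```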

The method here is pretty simple. I scraped LR for tags and dates. There are three questions I'm interested in answering:

1. what are blogging patterns like? i.e. how frequently does BL post on LR and how much each day?
2. what tags get used most frequently?
3. what's the frequency for co-occurring tags?

In this post and the next three, I'll answer these three questions. Then I'll attempt a wrap-up.

1. Frequency
To me, this is probably the least interesting question, but it's one that can be answered easily.

UPDATE: This was not as easy as I expected (but it was still a useful exercise). Scraping LR gave me the dates BL posted, but not the dates he didn't post. Here's a good set of instructions for how to fill those in.
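In tidyverse terms, the fix amounts to a `complete()` over the full date range. A toy sketch (the data and column names are mine):

```r
library(dplyr)
library(tidyr)

# Toy stand-in for the scraped post dates: two posts on Jan 1, none on Jan 2
post_dates <- tibble(date = as.Date(c("2019-01-01", "2019-01-01", "2019-01-03")))

# Daily counts of posts -- zero-post days are missing entirely
daily <- post_dates %>% count(date, name = "n_posts")

# Fill in every calendar day, with 0 posts on the missing ones
daily_full <- daily %>%
  complete(date = seq(min(date), max(date), by = "day"),
           fill = list(n_posts = 0))

daily_full  # three rows: Jan 1 has 2 posts, Jan 2 has 0, Jan 3 has 1
```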

Here's a line graph showing the volume of blogging from 1 January 2019 to 31 March 2019.
[Plot: posting volume, 1 January to 31 March 2019]
BL is a prolific blogger. He took only five days off from blogging in the 1st quarter! At most he posted 6 times in a day, and on average he posted 2.7 times a day.
2. Tags
A quick methodological note: I often shortened the tag to something that captured the spirit of the tag but was much more readable on a plot. E.g. "The less they know, the less they know it" was shortened to "know-nothings." Also, all guest bloggers were collapsed into "guest" (but BL hasn't had many guest bloggers on in the last few years, so that's not a worry for right now). What tags got used the most during the 1st quarter? Here's the histogram.
[Plot: tag frequency histogram, 1st quarter]
'NA' indicates those days when BL didn't use a tag. So what's to see? BL's go-to tags are about philosophy in the news and stuff of cultural interest. He doesn't use LR for law school updates very much. In fact, the majority of the tagging is about issues related to the profession. BL posts quite a bit more about professional issues than about what he colorfully calls the "Twitter Red Guard" and "The New Infantilism." Nonetheless, he has a special place on his blog (and in his heart?) for Justin Weinberg and a handful of other folks (as we'll see in the next few posts).

However! The "Phil in the News" and "The Academy" tags are often paired with "Justin Weinberg" and "The New Infantilism." So at least some of the professional-issue tags are also about the subculture in our discipline that prefers Daily Nous to LR. What can help clear this up is a plot looking at tag co-occurrence.

3. Tag co-occurrence 
This plot shows how often pairs of tags show up together. (This is a super helpful set of directions for computing the co-occurrence matrix.)
[Plot: tag co-occurrence matrix, 1st quarter]
So what does this tell us? Notice first that the plot isn't symmetrical: what's above the yellow diagonal isn't the mirror of what's below. So for any co-occurrence of two topics, you could have two different values. Look at "Academic Freedom" and "Justin Weinberg" for an example. To interpret the plot: each co-occurrence value is relative to the total number of occurrences of Topic 1 along that row.

Take "Academic Freedom" and "Justin Weinberg" to start. The plot tells us the co-occurrence of these tags relative to all occurrences of each. To find their frequency relative to "Academic Freedom", find the intersection of the two with "Academic Freedom" appearing as the row value. Their co-occurrence relative to all instances of "Academic Freedom" is rather low (~.07). But the co-occurrence of "Academic Freedom" and "Justin Weinberg" relative to all instances of "Justin Weinberg" is rather large (about .67). So Justin is one concern about academic freedom on BL's blog, but he's far from the only one. And whenever BL is talking about Justin, it's often about academic freedom.

Why do it this way? It doesn't make sense to relativize everything to the topic with the greatest number of tags; it just swamps everything. "Fascism Alerts" and "Cultural Interest" co-occurred 10 times, but that's a blip against the total number of times "Phil in the News" showed up (which is 454). I tried relativizing to whichever of Topic 1 and Topic 2 was larger. This makes a symmetrical plot, but it papers over important info: if A and B have a co-occurrence value of .5, it's not clear whether that's relative to all instances of A or of B.
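For the curious, here's a minimal sketch of that row-relative normalization, starting from a posts-by-tags indicator matrix (toy tags and data, not the actual LR counts):

```r
# Toy posts-x-tags indicator matrix: 1 if the post carries the tag
m <- matrix(c(1, 1, 0,
              1, 0, 1,
              1, 0, 0,
              1, 1, 0),
            nrow = 4, byrow = TRUE,
            dimnames = list(NULL, c("news", "freedom", "weinberg")))

co <- t(m) %*% m                     # raw co-occurrence counts (diagonal = tag totals)
totals <- diag(co)
co_rel <- sweep(co, 1, totals, "/")  # row i is relative to tag i's total

co_rel["news", "freedom"]  # 2 of 4 "news" posts also carry "freedom": 0.5
co_rel["freedom", "news"]  # but every "freedom" post carries "news": 1
```

Note the asymmetry: the same pair of tags gets two different values depending on which one sits in the row.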

So what does the plot tell us? One thing that stands out is the rather light-colored column for "Phil in the News". This tells us that "Phil in the News" is a rather promiscuous tag, relative to how often other tags are used. This is confirmed by the relatively dark shading of the "Phil in the News" row as Topic 1: no single tag stands out relative to the total number of tokens of "Phil in the News."

A few other bright-colored spots:
  • notice that "Job Search Advice" & "PGR" (Philosophical Gourmet Report) relative to "Job Search Advice" is a bright yellow. So whenever BL is talking about job search advice, it's in connection with the PGR.
  • But given all posts about the PGR, BL might be talking about different things: job search advice, but also philosophy in the news, issues in the profession, and the nature of philosophy. This last thing is kinda interesting: given the total number of PGR tags, you get "What is Philosophy?" roughly 20% of the time (PGR as Topic 1 and "What is Philosophy?" as Topic 2). Looking at the histogram above, the PGR didn't get tagged much in the 1st quarter of 2019, but it's an interesting insight that it's tied to tags about the nature of philosophy as much as it is. 
  • given all instances of "academic freedom" (which has the 4th greatest number of tag-tokens), you'll find "Fascism Alerts" with it roughly 1/4 of the time and "The New Infantilism" a little less than that. 
  • BL doesn't often use the "Justin Weinberg" tag (see histogram) but when he does, you better believe it's coming with "The New Infantilism". 
  • And "wankers" is rare, but it's almost always news.  
Let's compare these last two plots to the posts BL picked out in his year in review. The method here is exactly the same as before. The only difference is in the scraping code: I went to the collection of links for the 1st quarter of the year in review, grabbed those links, and then scraped the tags. 
[Plot: tag frequencies in BL's picks, 1st quarter]
Some stuff to see here: "Phil in the News" is top again, but that's not surprising given how promiscuous the tag is. "PGR" and "Job search advice" occur a lot more often in BL's picks than in the 1st quarter overall. 

Let's look at the co-occurrence plot.  
[Plot: tag co-occurrence in BL's picks, 1st quarter]
And here's the previous one for easy reference:
[Plot: tag co-occurrence, 1st quarter overall]
Co-occurrence of "New Infantilism" and "Academic Freedom" relative to all instances of the former is greater in BL's picks than in the 1st quarter overall; samesies replacing "New Infantilism" with "The Academy". And "What is Philosophy" co-occurring with "PGR" relative to all instances of "What is Philosophy" is also over-represented in BL's picks.

What is the PGR supposed to measure? Part 3

12/18/2019

0 Comments

 
There aren't arguments here but rather a disclaimer. Leiter will sometimes use the language of "taking down" the PGR or of a "slave revolt" among philosophers at lower-ranked PGR schools (on reflection I'm not 100% sure on this but it seems on-brand). Honestly, I give roughly zero fucks if I ever work at a ranked institution. That's not a thing that will add to my quality of life. My big concern is that the PGR is being sold as a ranking system and a guide to choosing grad programs. Now that's true: it's a ranking of programs based on surveys of philosophers. But whether it's a good guide to choosing a grad program is an entirely different ball of wax. Go back to the brewery analogy: if your tastes are for Rainier and PBR, then you're not going to care about the latest microbrew out of Portland, OR. You might like it. Or not. But it's your tasting that determines the liking, not the preferences of experts. And to say that the tastes of the experts are normative is worrisome. The experts might like the microbrew but it's not the case that you're failing if you don't. 

Anyhow, I'm not looking to replace or "take down" the PGR. I still have to wash the dishes no matter whether the PGR implodes tomorrow or becomes mandatory reading for undergrads. My main concern is for us as a group of professionals to get a better sense of what the data are telling us.


What is the PGR supposed to measure? Part 2

12/18/2019

0 Comments

 
Previously I argued that the PGR is not a good measure of likelihood of job placement. I came across this post from Leiter Reports which suggests that the PGR isn't just for placements simpliciter but good placements, where "good" means something like "a high-quality, PhD-granting institution." How do we determine if one good placement is better than another? By the hiring-institution's ranking on the PGR.

I don't find this argument -- that PGR is a good measure of good job placement and not just job placement -- terribly interesting. First, there are lots of reasons why someone wouldn't want to work at a high-PGR school. I like my Jesuit SLAC because we focus on teaching and I have a lot of support from the admin to try new ideas, both in teaching and research. And nobody is anal about the job: most of us pursue some kind of work-life balance and we have the support of our colleagues in pursuing it. 

Second, HAVE YOU SEEN THE GODDAMN JOB MARKET LATELY??? You might lovingly call it a "shit-show" but that's kind of an insult to shit-shows. My sample size is small and biased, but the jaw of every non-philosopher academic I've talked to drops when I tell them that it's normal to get 300-500 applications for an open/open position. While nobody wants a toxic work environment (which, I'm guessing occurs at all levels of professional philosophy), I think many folks consider themselves lucky to find a job in academia. So using the PGR as a way of predicting good job placements is out of step with lived realities of finding an academic job. The market has driven us from wanting good jobs to wanting merely jobs. We've got loans to pay and mouths to feed.

That all brings me to today's subject: perhaps the PGR is good at predicting the quality of philosophy one hopes to do. On average, the suggestion goes, products of higher-ranking PGR programs do better philosophy than products of lower-ranking PGR programs.

Now if you're thinking to yourself, "hey not-so-smart guy, you went to Fordham, which isn't very highly ranked, so clearly you've got an axe to grind!" lemme stop you right there: I have very little love for my alma mater for a wide variety of reasons. I have no desire whatsoever to budge Fordham's spot in the PGR.

Back to the matter at hand. First, the biggest issue: if we're going to say that PGR ranking predicts quality of philosophy that is likely to be done by graduates, we need some metric of quality of philosophy. As far as I know, the only one offered is the same as for smut: you know it when you see it. So let's run with this for a second. We know good philosophy when we see it. Presumably this means something like, "in reading certain kinds of philosophy, we experience X," where 'X' is short for a set of positive thoughts and feelings. And I'm sure we've all had this experience. Reading William James got me into philosophy in the first place, and I've gotten that feeling reading (in no particular order) Aristotle, Wittgenstein, Andy Clark, Susan Stebbing, Alva Noe, Jenny Saul, Alvin Goldman, Mary Midgley and Richard Menary, among many others.

The PGR, then, predicts that graduates of more highly-ranked programs are more likely to write material that enables you to experience X than graduates of lower-ranked programs, provided your tastes are like those of the raters.

On this view, the PGR works kind of like a ranking of vineyards or (what I'm more familiar with) breweries. Beer-ranking experts might say that products of brewery A are superior to products of brewery B. On this view, the best way to think about the PGR is as a taste-guide for consumers of philosophy: experts agree that the philosophy of mind coming out of NYU's grads is superior to that coming out of Stanford's grads.

Fortunately, there's a relatively easy way to see if the PGR makes good predictions on this score. (Relatively easy in principle, at least; we don't need anything like Twin Earth or Laplace's demon.) And this is a study that the APA should definitely fund. Pick some number of grads from schools at every tier of the PGR. (I think you could do this with the general ranking, but it might work better with the specialties.) Commission them to write a short (~3k words) paper of their choosing, but prepare it like they would for peer review: absolutely no identifying information. (In exchange, perhaps these papers could come out in a special issue of the Journal of the APA, or somehow or other compensate authors for their time.) Then, give these papers to other philosophers and have them guess the tier of school the author comes from. We can instruct raters that they are to pay attention to the "you know it when you see it" criterion for quality philosophy, since that's what we agreed to at the start of this investigation. If it turns out that raters are pretty accurate, then we might regard the PGR as a reliable guide to the quality of philosophy their grads produce.

Now we began this by suggesting that "good philosophy" produces a kind of feeling. But what if that's the wrong metric? Maybe we rank "good philosophy" by some weighting of publications, citations, awards, grants, whatever. On this suggestion, higher-ranked PGR programs produce grads who produce more papers at higher-ranked venues that are more often cited (and so on) and who also win more grants than grads of lower-ranked PGR programs. I think I have some beef with that definition of "good philosophy", but let's run with it for now. That's a (relatively) simple task that can be done by gathering publicly available data. (At least, I think most of it is publicly available.) So then we just need someone with the patience, financial support, and data-analysis chops to gather publication info, citation rates, and grant-winner info (probably from the NEH, NSF, and Templeton, among others). These can be given weights, or a range of weights, and then we can see if the PGR makes good predictions.

Either way, the key is to remember: 
1. the PGR is a survey expressing preferences,
2. if the survey data are to be useful, they have to make predictions about grads, and
3. these predictions are testable.



While I'm on the usefulness of Web of Knowledge for grad students...

12/17/2019

0 Comments

 
You can use Web of Knowledge to get a handle on the most important papers in a field you're just starting to dabble in (or discover important papers that might have slipped under your radar).  From the last post we saw how to search Web of Knowledge. After you land on the page returning the citations for your query, sort by "Times Cited." (Sorting options are above your returned citations.) You'll see a little arrow next to "Times Cited" -- down means most-to-least cited (selecting "Times Cited" again reorders from least-to-most). Voila! You have a list of citations from most to least cited. Of course, there are limitations:

1. This does not include books. The citations from your query are only what's logged in Web of Science. Admittedly, this is more journals than you can shake a stick at (and most of which you've probably never heard of), but just know that books aren't included. 

2. Any journals that aren't logged in Web of Science aren't included. My hunch is that journals that aren't logged are super obscure and likely aren't the best place to get up to speed on essential papers within your subfield.


    About me

    I do mind and epistemology and have an irrational interest in data analysis and agent-based modeling. This blog is about job market analyses.

