Breakdown of Programs by Specialties

The rankings of programs in the specialty areas are based on surveys by experts in those specialties. Because, in many cases, the ratings reflect the presence of only one or two faculty in a department, the Advisory Board decided not to publish the precise scores. Programs are instead placed in “groupings” based on the rounded mean (rounded to the nearest 0.5, so that, e.g., 2.75 rounds up to 3 and 3.24 rounds down to 3). Next to each grouping you will find the rounded mean for that group; next to the name of each program within that group you will find the median score for that faculty in parentheses, followed by the mode score. Where the mode and median are higher or lower than the mean, it is probably safe to assume that there was some notable divergence of opinion among evaluators. (Where there was more than one mode, both are listed.) Within a grouping, programs are listed alphabetically. Only programs with a rounded mean of “3” (meaning “Good”) or higher are so grouped. (To increase the pool of faculties a student should consider, any school with a mean of 2.5 or higher and either a median or mode of 3 was also rounded up to “3” and listed.)

After the surveys, members of the Advisory Board identified faculties that were not evaluated this year but that have strength in a specialty; these programs are listed as “recommended for consideration” after the ranked listing of formally evaluated programs.

The purpose of the specialty rankings is to identify programs in particular fields that a student should investigate for himself or herself. Because of the relatively small number of raters in each specialization, students are urged not to assign much weight to small differences (e.g., being in Group 2 versus Group 3). More evaluators in the pool might well have shifted a rounded mean by 0.5 in either direction; this is especially likely where the median score is above or below the norm for the grouping. Also bear in mind that (1) programs with more faculty specializing in an area tended to be rated more highly than those with just one philosopher in the field; and (2) programs with specialists on the regular full-time faculty, rather than “cognates” or part-time faculty, tended to be rated more highly in the field.

The lines between the specialty categories are not always hard-and-fast. What one philosopher might call an issue in philosophy of language, another might call an issue in philosophical logic or philosophy of mind. Students might look at the useful Blackwell Companions to Philosophy, or the equally valuable (albeit less detailed) Oxford volumes (ed. Grayling) on Philosophy: A Guide Through the Subject, to get some sense of how the fields are customarily demarcated.

It is worth noting that the results were checked for evidence of strategic voting; there was none. Evaluators were admirably responsible and honest in their assessments, and there were fairly high levels of consensus on the strengths of the faculties among the evaluators who completed the surveys.

Results are grouped into five broad areas, reflecting conventional demarcations: “Metaphysics and Epistemology”; “Philosophy of the Sciences and Mathematics”; “Theory of Value”; “History of Philosophy”; and “Other”.

Follow faculty moves subsequent to the survey by visiting the Leiter Reports “Philosophy Updates” link.

Evaluators were asked not to evaluate either their own department or the department from which they received their highest degree (PhD, DPhil, sometimes the BPhil) and were told that such ratings would be removed. Evaluators were also asked not to evaluate departments in areas in which they were not competent. All stray or improper ratings, including ratings in areas where the Advisory Board judged an evaluator not competent, were removed from the survey data. Every evaluator for each specialty area is listed.