Truth about Universities - commentary
Earlier this week, I was flipping through the latest issue of Monocle, the new magazine from Wallpaper* creator Tyler Brûlé. The cover story in this month's issue is a ranking of the top 25 cities in the world. Being a devotee of rankings, I read it eagerly.

The first thing I noticed, of course, was the massively unscientific nature of the methodology. Although there are allegedly some quantitative metrics involved, it's basically an entirely subjective set of rankings which nevertheless roughly captures some basic prejudices of the cosmopolitan class: if your city has good architecture, lots of "creative class" jobs, decent urban planning and transportation infrastructure, and nice shopping, you come out on top. If not, not. We could argue a bit about the relative merits of Copenhagen (1st place) and Vancouver (8th), but there's no denying that it's a plausible list that gives rise to some interesting and useful discussion about what makes a city great.

It made me wonder why universities can't have discussions like that. Nobody, for instance, takes US News and World Report or Maclean's as a starting point from which to discuss what makes a good university; more commonly, the rankings are taken as an opportunity to dogpile on the editors and tell them why they are wrong. Introspection takes a back seat. It's easy to blame the rankers, but the institutions actually have the rankers coming and going: if the rankers were to do as Monocle does and simply name a bunch of institutions that by common consent seem to be pretty good, they'd be assailed for being unscientific. As it is, by trying to make everything explicitly quantitative, they get accused of oversimplifying the educational process. They can't win.
Now, this is not to absolve the rankers of their many sins; a distressing number of them seem to actually believe that their figures reveal some absolute truth about the relative quality of institutions, rather than being primarily reflections of deeper factors like money, age, and size. But do you ever see mayors and city managers sound off at The Economist or Monocle for methodological errors the way university presidents do about US News? No. And therein lies an intriguing difference.

Exactly why are institutions so sensitive about rankings? The obvious answer is money: good rankings mean more donations and bad rankings mean fewer. But this is incomplete at best. Inside the US, the top institutions in US News would likely have got the big donations anyway; outside the US, where philanthropy is less developed, rankings hardly make a difference. So if not money, then what?

The answer, simply, is prestige. Prestige is the coin of the academic realm. We can sort of measure it through things like bibliometrics, but academics don't really need those tools to know who's doing well and who's not in the profession. Just from reading each other's work (and reviews of each other's work), an informal pecking order develops naturally.

Here's where it gets interesting, though. While prestige is a perfectly natural method of determining hierarchy among academics, using prestige to create a hierarchy of institutions is widely condemned as totally invalid (indeed, it is the prestige rankings that have been the focus of presidents' discontent in the US). It would be easy enough to point out this hypocrisy and laugh, but I'm convinced this seeming double standard actually points to something serious about universities themselves. If prestige is OK for academics, why isn't it good for universities? The answer, fundamentally, is that within the academy, universities themselves are not seen as valid units of analysis. Sound bizarre?
Well, from a historical perspective, not really. Go back just two centuries and universities were little more than marketplaces which hosted faculty who charged students directly for their services. The idea that the institution itself had a corporate identity or reputation separate from its faculty really dates only from the mid-nineteenth century. Even now, institutions are not single units so much as aggregations of hundreds or even thousands of faculty members. And these faculty members are not, for the most part, loyal to the institutions they serve. Rather, they are loyal to their disciplines: to the mentors, teachers, and colleagues who try to advance knowledge in the same field. They are loyal to each other because they have shared long years reading the same material, developing the same habits of mind, and working on the same problems. Members of each of these fraternities feel far more kinship with one another than any of them do with faculty members in other intellectual domains at their own universities, with whom they might share nothing more than a cafeteria.

Seen this way, each institution is nothing more than a collection of local chapters of international intellectual fraternities. The quality of each chapter at an institution is more or less independent of the quality of any other chapter at the same institution, except to the extent that financial muscle can attract better quality across the board. Nobody likes to come out and say this, of course. Part of the success of universities as a corporate form has been due to their ability to convince governments that the institution itself matters, that education which takes place across many disciplines has real value, and that this value can be ascertained and formally acknowledged via the conferral of a degree.
Admitting that this is to a considerable degree a charade, and that faculty only put up with this pan-institutional stuff so they can collect a paycheck and get on with their real vocation of advancing knowledge in their chosen field, would cause serious problems with most institutional stakeholders. And yet rankings insist on taking institutions at face value and rating them as single organizations. It's perfectly infuriating and perfectly reasonable at the same time. And it explains a lot of the heat around rankings.

Enjoy the weekend.