The shrill cries of “We’re number one, we’re number one!” ring out every autumn across Canadian university campuses. Normally the preserve of triumphant varsity teams, this wild jubilation is, more than ever, caused by overexcited university administrators celebrating success in university rankings.

Love them or hate them—usually depending on how an institution fares—university rankings are a phenomenon that pronounces on the quality of the academic and student experience around the globe. According to the Washington, D.C.-based Institute for Higher Education Policy, more than 30 nations now engage in some form of regularly published rankings.

In any discussion of university marketing efforts, the importance of university rankings cannot be overlooked or underestimated. Rankings have a distinct impact on the reputation of universities, and a university’s reputation influences crucial audiences, such as prospective students, faculty, donors, alumni, and even governments.

Yet, are university rankings credible? Is it possible to describe the state of a university with a single number? The intuitive and rational response is “No.” Universities are dynamic and complex organizations that cannot be reduced to a single score for quality in teaching, research, and learning enterprises. There is a substantial body of work providing substantive critiques of university rankings. In sum, university rankings are criticized for the questionable relevance of the measurements chosen, the methods by which data are collected, the scoring of each measurement, and the subjective weightings given to each measurement as they are aggregated into a final score.

Despite criticism over their methodology, university rankings thrive and create a public pronouncement on the perceived quality of an institution. For university marketers, rankings provide a framework for either a positive discussion point about a university, or an embarrassing black spot that must be endured or overcome.

Rankings narrow the discussion about a university, when marketing efforts should instead be focussed on developing a far-ranging understanding of what the university offers through its distinct mission. York University’s Chief Marketing Officer Richard Fisher sums up the objective for university marketers: “What branding can do for a university is differentiate it and elevate it, creating a destination instead of a commodity.”

And yet, university rankings are purposefully designed to create a commoditized view of universities by sustaining the impression that simple comparisons among institutions can be made in a like-for-like manner.

To understand the limitations of university rankings, it is informative to look at two examples: first, how a seemingly important measurement can be, and is, distorted within university rankings; and second, how the widespread use of opinion research within rankings provides data that have limited credibility.

Student/faculty ratios are one of the most common measures of quality in rankings. The ratios are used primarily as a measure of the quality of the student experience.

What is clear is that a student/faculty ratio is an input measure and can serve only as a proxy for an assessment of quality. A low student/faculty ratio does not necessarily translate into smaller classes or into students interacting with faculty more frequently than at institutions with higher ratios. Furthermore, the institutional measure is aggregated from programs across the university and will not necessarily reflect the experience of each department or discipline, or even the undergraduate experience as compared with the graduate experience.


Even after the limits of using student/faculty ratios are set aside, one would assume that the actual counting of faculty and students would be easy enough, but this is not the case. The University of Toronto has published an online critique of student/faculty ratios wherein the challenges of accurately and consistently counting students and faculty are exposed.

The key challenge appears to be the use and interpretation of common definitions. When counting students, should an institution only count registered students pursuing degrees? Or is it more meaningful for the institution to count individuals pursuing diplomas and certificates, those taking continuing education courses, or even postgraduate medical trainees?

Counting faculty is even more problematic. Again, as the Toronto critique points out, there are many different categories of academic appointments and many ways to count them. Faculty can be categorized by appointment status (e.g. tenure-stream, teaching-stream, short-term contract, adjunct), by rank (e.g. assistant, associate, and full professors), by time commitment (full-time, part-time), by job description (e.g. research scientists, clinical faculty), or by salary source (university or affiliated institution).

As one looks at the range of academic appointments, the challenge of assessing and counting each individual faculty member’s contribution to teaching and learning becomes daunting, if not impossible.

Turning to one example, one can see these definitional challenges in how the Times Higher Education Supplement (THES) “World University Rankings” reports student/faculty ratios, and in the wide disparity in results that arises when institutions do not use consistent definitions and interpretations.

The 2008 THES rankings contain three Canadian universities in the top 100 worldwide: McGill (20), University of British Columbia (34), and University of Toronto (41). But a comparison of student/faculty ratios among these three schools is quite surprising. McGill has a student/faculty ratio of 6.0, compared to UBC’s 9.8 and Toronto’s 27.5. For comparison purposes it has been necessary to use published headcounts as THES does not provide full-time equivalent information for all universities.

Obviously, Toronto and McGill used significantly different interpretations, which produced a result showing McGill employing more than four-and-a-half times as many faculty per student as Toronto. This result is simply not credible. It clearly demonstrates that the lack of common interpretations—or of any auditing and vetting by the THES editors—has reduced the value of the published student/faculty ratios within the THES to virtual meaninglessness.

To underscore the impact of definitions, these same three institutions have also published student/faculty ratios using the Maclean’s methodology. McGill’s ratio rises 176 per cent above its THES figure to reach 16.6, UBC’s climbs to 16, and Toronto’s improves to 24.9.
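To make the arithmetic behind these comparisons explicit, here is a small illustrative check using only the ratios quoted above; the figures are simply restated for the calculation and nothing here comes from the underlying THES or Maclean’s data files.

```python
# Student/faculty ratios quoted above, expressed as students per faculty member.
thes = {"McGill": 6.0, "UBC": 9.8, "Toronto": 27.5}        # 2008 THES figures cited above
macleans = {"McGill": 16.6, "UBC": 16.0, "Toronto": 24.9}  # Maclean's-methodology figures cited above

# Dividing Toronto's ratio by McGill's gives the factor by which McGill
# appears to employ more faculty per student under the THES numbers.
factor = thes["Toronto"] / thes["McGill"]
print(round(factor, 1))  # 4.6 -> "more than four-and-a-half times"

# Percentage change in McGill's ratio when the Maclean's definitions are applied.
change = (macleans["McGill"] - thes["McGill"]) / thes["McGill"] * 100
print(round(change, 1))  # 176.7 -> the roughly 176 per cent increase cited above
```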

The point of the above is not to identify a superior method of evaluating student/faculty ratios. Rather, it is meant to caution that different approaches to definitions, assumptions, and collection methods can produce a range of results wide enough to make a large, urban university look like it rivals the class sizes at Aristotle’s Lyceum.

Many university rankings also employ opinion surveys. These surveys are usually given significant weight in the overall ranking score. For example, the THES assigns a weighting of 40 per cent to its academic peer review and an additional 10 per cent to its employer survey.
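To make concrete how such weightings fold survey opinion into a single number, here is a minimal sketch of a weighted composite score. The 40 per cent and 10 per cent survey weights are those cited above; the remaining measures and their weights are hypothetical placeholders for illustration only, not the actual THES scheme.

```python
# Illustrative composite ranking score: each measure is a normalized 0-100 score,
# and the published rank is driven by a weighted sum.
weights = {
    "academic_peer_review": 0.40,    # opinion survey weight cited above
    "employer_survey": 0.10,         # opinion survey weight cited above
    "student_faculty_ratio": 0.25,   # hypothetical placeholder weight
    "research_measure": 0.25,        # hypothetical placeholder weight
}

def composite(scores: dict) -> float:
    """Weighted sum of normalized scores; half of the total weight here rests on opinion surveys."""
    return sum(weights[measure] * scores[measure] for measure in weights)

# A hypothetical university that is strong on the 'hard' measures but middling in the surveys.
example = {
    "academic_peer_review": 60.0,
    "employer_survey": 55.0,
    "student_faculty_ratio": 90.0,
    "research_measure": 85.0,
}
print(composite(example))  # 73.25 -- the survey half of the weighting pulls the score down
```

Because the subjective survey components carry half of the total weight in this sketch, even modest differences in reputational opinion can swamp measurable differences elsewhere.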

It is evident that the THES goes to great lengths to build a robust academic survey that prevents the selection of a home institution and also weeds out any discipline bias in the event that an oversampling of natural scientists from Australasia occurs. Furthermore, there are questions designed to corral respondents into the most appropriate academic sphere, so that engineers respond about their knowledge of engineering schools rather than of schools with Slavic language programs. Yet, it is important to point out that this survey screen is self-selecting. There are no benchmarks—it relies solely on how respondents view their individual knowledge.

And yet, even after screening respondents for discipline and regional appropriateness, the survey still sets out a challenging task when it asks respondents to assess up to 10 other regional universities. By way of example, let us assume that a respondent has indicated regional knowledge of Canada and has selected broad knowledge in the arts and humanities, with specific subject-area knowledge in history and French.

The respondent will then be provided with a list of 46 Canadian universities and be asked to select up to 10 universities the respondent regards as producing the best research in the arts and humanities.

The survey provides no criteria for how one defines “best research.” That interpretation is left to the individual respondent, which, of course, undermines the consistency behind how the question is answered. Furthermore, the survey requires no consistency of knowledge that the respondent has for each of the selections. At best, respondents will be in a position to assess the contributions of academics at other universities in a shared discipline. Yet, how can that relatively narrow assessment be extrapolated to pronounce on the quality of a department, or a school, or the overall capabilities of the institution to deliver a robust research environment?

The above two examples are presented not as a decisive methodological critique of rankings—far from it. Rather, they are meant to demonstrate that a high degree of caution should meet any claim of rankings as a credible source of objective and consistent information to evaluate institutions.

One would assume that the methodological problems of university rankings would lead to a great degree of resistance from universities. Although many university presidents take pains to distance themselves from rankings, there is still a great deal of effort on the part of universities to use rankings as marketing platforms.

Many universities have implicitly endorsed rankings by trumpeting rankings results through press releases and other communications vehicles, such as letters to alumni. Some universities have adopted rankings results into broader awareness-building campaigns. One such institution is the University of Guelph, where an aggressive print advertisement campaign has been underway for several years to raise awareness of the institution.

According to Guelph’s director of public affairs and communications, Chuck Cunningham, the campaign was designed to raise awareness among influencers, such as business leaders, about the university’s research and teaching strengths. Cunningham credits the campaign with boosting awareness of the university, as evidenced by several factors, including increased applications.

Guelph’s campaign included ads highlighting the university’s number one ranking among comprehensive universities in Maclean’s magazine. Interestingly, Guelph was undeterred when it fell out of first place in the Maclean’s rankings. The university changed its advertising copy to cite instead a ranking produced by Research InfoSource Inc., which ranked Guelph first in research among comprehensive universities. Research InfoSource restricts its measures to financial inputs and research outputs and is, therefore, quite different from Maclean’s. Clearly, the imperative for Guelph was to ensure that it could claim a number one rank to mesh with the thrust of its advertising.

Even if a university wished to, it is difficult to ignore rankings, as they appear to influence some segments of students. According to a U.S. study, the year after a school fell in the rankings, the percentage of the applicants it accepted increased, but it received fewer acceptances from its admitted students. Furthermore, its entering students’ SAT scores fell.

The impact of rankings has led to decisions at universities specifically designed to improve rankings performance. Some universities discount tuition and boost financial aid to attract better students, with the aim of improving rankings scores that favour higher entering grade point averages. Other universities create two-tier MBA and law programs so that students with low test scores are placed into part-time programs, while those with high scores are enrolled in full-time programs that are counted in certain rankings.

Such efforts that could disadvantage students are troubling and have inspired new inquiries. The Institute for Higher Education Policy has sponsored a series of research projects to assess the impact rankings have on university decision-making among American schools.

Furthermore, efforts are underway to try to improve and refine rankings. In 2006, the International Rankings Expert Group undertook to develop the Berlin Principles—a set of good ranking practices intended to promote greater accountability for the quality of data collection, methodology, and dissemination among the publishers of university rankings.

While this attempt to drive greater accountability and transparency for university rankings might be laudable, there are questions as to its probable success.

First, many rankings publications are conducted by private sector organizations that are motivated to sell copies of—and sell advertising within—their rankings issues. It is doubtful that a Canadian magazine like Maclean’s will be motivated to ensure its publication meets criteria established by an international body—especially if the criteria seek to dilute the view of universities as commodities that can be evaluated in a simple manner.

Second, the 16 Berlin Principles are not prescriptive. The principles seek to motivate rankings publishers to pursue greater clarity and rigour in the construction of rankings. The Berlin Principles do not set out clear guidelines on appropriate measures, collection methods, or data aggregation from which one could decisively criticize a particular ranking.

Although a great deal of criticism has been, and will continue to be, levelled at rankings, it is highly doubtful they will be leaving the stage soon. University rankings are big business for the private sector publishers, who sell millions of copies of their respective issues and guidebooks. So long as a demand from prospective students exists, university rankings will be published. In addition, the Faustian bargain that many universities have made adds a veneer of legitimacy to many rankings that is probably unwarranted.

In the end, it comes down to the ultimate “consumer” of rankings, the prospective student, to assess what is being measured and whether those elements are of any relevance to her or his individual needs, values, and goals. AM

David Scott is a member of the Advisory Committee to Academic Matters and an unranked communications consultant living in Toronto.