Editorial Matters: The slow erosion of public university funding

Ontario’s public universities are vital institutions that deliver education to thousands of students, produce thought-provoking and groundbreaking research, and provide good jobs that support many diverse communities.

The province’s vibrant and renowned public postsecondary education system has been evolving for over a century. Core to its development has been a foundation of robust public funding delivered primarily through the provincial government.

Unfortunately, that bedrock of public financial support has been eroding for years, both on a per-student basis and as a share of university operating revenues. Since 2008, Ontario has ranked last among Canadian provinces in per-student funding and, for the first time in more than sixty years, tuition fees now account for more than half of university operating revenues.

A failure to maintain adequate levels of public funding threatens the quality of education and research provided by our universities. This approach inevitably shifts their activities to align with other sources of revenue (fundraising from private sources, higher fees from both domestic and international students, commercialized research) and creates pressure to reduce expenses (more students per class, higher faculty workloads, more contract faculty).

The government’s approach to university funding has profound implications for the student experience and research contributions. A government that makes university funding a priority and maintains a high level of public investment is not just investing in institutions and educational outcomes, but in people, their communities, and our collective future.

This spring’s provincial election campaign presents a valuable opportunity to discuss these challenges. It’s a lot to cover, but in this issue of Academic Matters we explore why public funding is so important for our universities and how we can work together to make funding postsecondary education a priority for the next government.

Graham Cox elaborates on the essential role universities play in our society and how public funding is vital for them to effectively fulfill their mandates. He explores the structure of the funding model changes being proposed by the government and how they will impact postsecondary education in the province.

Gyllian Phillips addresses the stagnation of full-time faculty hiring at Ontario’s universities during a period in which student enrolment has increased dramatically. She suggests how the government should be investing in a robust faculty renewal strategy.

Jeff Noonan discusses the importance of publicly funded basic research. He notes that this research continues to play second fiddle to research linked to short-term commercial profits (much of which is actually publicly funded), and how this approach undermines innovation.

Reflecting on a poll commissioned by OCUFA, André Turcotte and Heather Scott-Marshall describe some of their findings and provide some thoughts on how postsecondary education can become a more prominent issue in current and future provincial elections.

Nour Alideeb recounts her experiences navigating campus conflicts as a student and illustrates the dangers of allowing students and faculty to be pitted against each other. She highlights the benefits of university students, faculty, and staff coming together to build alliances that advance their priorities, locally and provincially.

In a special two-page spread, we illustrate the composition of Ontario university funding over the decades, showing how events like World War II, changes in federal-provincial relationships, and tuition fee policies have impacted the makeup of university funding.

Finally, the always funny Steve Penfold returns with a new edition of his Humour Matters column.

There are many important considerations when it comes to postsecondary funding—how it is informed by government priorities, how it informs university priorities, and how students, faculty, and staff can use their collective power to influence those priorities. We have only been able to explore some of these questions, but this issue of Academic Matters serves as a reminder of why public university funding is a vital investment for the future of Ontario.

We hope you enjoy reading this issue as much as we enjoyed putting it together. We think it’s an important one. As always, we love to hear your thoughts. A reminder that every article in this issue, and many more, are available on our website: AcademicMatters.ca. Thanks for reading. AM

Ben Lewis is the Editor-in-Chief of Academic Matters and Communications Lead for OCUFA.

The case for publicly funded universities

Ontario’s universities are important public spaces that depend on robust public funding to thrive. When the structure of the funding model changes, how does that impact the fundamental mandate of the university?

Universities are complex public systems embedded in the heart of our communities. By sheer size, they can be larger than small municipalities, have more physical infrastructure than a city, and provide a comprehensive array of public services.

All told, Canadian public universities are massive employers of students, teachers, researchers, librarians, academic and research support technicians, academic support workers (custodians, building services, food services, grounds and building maintenance), apprentices, counselors, utility workers, administrators, clerical workers, bartenders, security guards, and parking staff. Together, all of these workers maintain a space that fosters the advancement and dissemination of knowledge.

A functioning university system should provide inclusive spaces, welcoming to the broader community. Academics need supportive environments so they can ask the hard questions required to advance academic (and social) interests. Students depend on these supportive environments to develop and expand their understanding of themselves and the world—and sometimes even the universe—around them.
In Ontario, as in many other jurisdictions, public universities provide a distinct and important academic experience. Unlike many K-12 schools or trade colleges, a university education is supposed to immerse students in an active process of advanced research, analysis, and discovery, not just rote memorization.

Academic research is the foundation for the rest of the research community (primarily state and industry research) and allows for the development of many of the scientific and cultural advancements produced for the public.

Unfortunately, the Ontario government has been neglecting the university as a space for true academic work for years. (Neo)Liberal government funding policies have ignored the fundamental importance of the academy and its unique role in advancing knowledge for the benefit of society. Successive governments have introduced policies that prioritize outcomes that can be commercialized. This approach negates the historic role of the academy—one in which the search for knowledge has inherent value for society as a whole, not just narrow commercial interests.

In the latest round of university funding model changes, the government has re-imagined the funding formula as a tool to further corporate trends. The structure of the formula compels universities to shift their priorities and resources to reflect current fads in management policy and short-term labour market goals. This new model is focused on cost minimization, commercial research subsidization, and skills development for new workers to support profit generation at “Ontario” companies. The essential and unique experience of academic research and study as a space for curiosity-driven knowledge generation has been all but abandoned—except in promotional rhetoric.

Remaking the Ontario funding framework

Funding priorities set by government and university administrators have far-reaching impacts on the form, function, and focus of academic programs. If funding is focused on basic research, students learn in a supportive environment where free thought reigns. If funding is focused, as it increasingly has been, on the short-term exploitation of research results for profit, then students learn in an environment where true academic freedom is discouraged if it does not advance those goals.

Universities should provide an environment where students are taught how to think critically and creatively, not focus on teaching a narrow set of skills currently deemed to be in high demand in the workforce. Students need to develop methods of critical analysis, so that they are equipped to begin trying to solve some of society’s more complex problems. Those who defend the academy understand its social and economic benefit—that it produces the minds and knowledge able to deal with the future’s known (and as yet unknown) problems and invent needed solutions. This requires prioritizing funding basic, curiosity-driven research.

Why then is the government changing the model through which Ontario’s universities are funded? Their motivation seems to be embedded in two pieces of rhetoric:
1) that the existing structures of funding were old and
2) that the existing structures of funding were overly complicated. Given that universities themselves are old and complex, this hardly seems to justify such a fundamental shift. But, if one digs deeper, the true motivation is revealed: reforming universities so that they run more like corporations and are structured to prioritize the interests of for-profit businesses—instead of as public services.

The current provincial Liberal government's attempt to shoe-horn market-based indicators into a public funding program simply furthers the marketization of the university system started by the federal Liberal government in the 1990s. These federal reforms—and their provincial knock-on effects—have led to sky-rocketing tuition fees, the undermining of basic academic research, the under-valuing of the social sciences and humanities, an explicit focus on commercializing (even core) research programs, and the promotion of short-term "entrepreneurial" values among students. In short, this represents the transformation of the university from an academy for the advancement of thought and understanding into one focused on supporting private commercial interests.

The goal has been clearly outlined in policy papers. Publicly, however, the government continues to argue simply that the existing funding model was "complicated" and "old".

Enrolment as a metric for funding

The numerous new metrics the government has introduced into the funding model are inspired by the profit-driven metrics so popular in the private sector. If the reforms were truly about advancing access to education and properly funding universities in the province, then funding would not be assessed at the institutional level through the tweaking of Strategic Mandate Agreements that are drafted and signed by administrators. Instead, funding would be tailored to supporting students and faculty. After all, education and research are the whole point of universities and it is students and faculty who actually do those things, not administrators.

As complex as the funding mechanism is, the base funding for an educational institution, regardless of its particular mandate, is not hard to calculate. An institution is allocated a proportion of available operating funding primarily on the basis of its student enrolment. In most cases, the source of the fiscal challenges faced by our institutions isn't the complexity of the model, but simply that universities are woefully underfunded. Inadequate public funding leaves institutions struggling to fulfill their mandates to support all those who come through their doors. This situation is exacerbated as more money is diverted to the new entrepreneurial objectives of neoliberal governments and business.

Quality as a metric for the commodification and marketization of education

The metrics being put forward by the government as part of the new funding framework undermine the ability of universities to serve the public. Funding should be structured, first, to provide universal access for the public (who fund it through their taxes), and, second, to ensure that all who have access are provided with the highest quality education possible. Maximizing "quality" without prioritizing universal access is, in itself, a marketization of the academy. It means that students from working class or socially marginalized communities who cannot afford a university education cannot get one.
As an example, consider a comparison of the US and Canada when it comes to healthcare. Private healthcare markets are concerned with the maximization of quality (regardless of cost) as providers compete with one another in the private marketplace. The result is some very high-quality healthcare that is out of reach for a majority of the US public. By contrast, the Canadian healthcare system offers the best quality service it can while providing universal access that attempts to address the healthcare needs of all Canadians. This means sacrificing what the market would identify as the highest quality option, such as assigning a single doctor to each patient. These different incentives result in fundamentally different structures of service provision.

University research and education is a significant expenditure (public and private). As such, the broader public should be involved in a conversation about what role society wants the academy to play and how those desires align with the long-term implications of proposed funding structures. The broader social and economic impacts of the academy are far too important to let business leaders and corporate ideologues in government drive the process. Unchallenged, these policies will transform universities into "colleges with research arms" and divorce the practice of teaching from research, thereby undermining both.

This quasi-professionalization of the development of academic policy is the de-democratization of the academy and its social mission. The commercialization of university research, the focus on entrepreneurial development, and the prioritization of STEM over the social sciences are the natural result of this approach.

A need for consultation

If the goal is to build an academy that will serve students today and advance social and economic prospects in the future, we should start where academic researchers would start: detailed critique, broad education and consultation, and proposals debated by knowledgeable peers. Only then would policy makers be properly equipped with the understanding needed to implement changes to university funding.

The development of metrics that support the corporate orientation of new university programs purposely ignores broader social impacts. As policy researchers know, social impacts are not as easily measured as commercial profit margins or sector-specific hiring numbers. The current metrics put forward by the Liberal government should be abandoned and replaced with measures of access and academic output that reflect the core mission at the heart of all public universities.

The marketization of university research has also led to the increased casualization of both academic and academic support work, the declining social value of degrees in the humanities, and the corporatization of the university. Left unmeasured is the loss of the important social innovation that academic work generates. The de-commodification of university access and research will help reverse the casualization of teaching and learning environments.

To build an inclusive and supportive environment for all workers and students at Ontario’s universities, the government should not only eliminate university access fees, but provide robust and sustainable base funding to institutions and allow for research funding to be set and administered at the peer level. AM

Graham Cox is a researcher at the Canadian Union of Public Employees who has been doing research for student and labour movement organizations in the academic sector for over 15 years.

It’s time to invest in a faculty renewal strategy for Ontario’s universities

For years, full-time faculty hiring has stagnated at Ontario’s universities, even as student enrolment has increased dramatically. It’s time for the government to invest in a robust faculty renewal strategy.

Public funding is foundational for a postsecondary system that provides accessible, quality education to students from all socioeconomic backgrounds. While recent efforts have increased access to postsecondary education through the refinement of the Ontario Student Grant (OSG), Ontario has, over the past decade, been losing ground to the rest of the country when it comes to funding our universities. On a per-student basis, Ontario’s university funding levels are 35 per cent lower than the Canadian average, and the province has ranked last in per-student funding for over eight years. This trend cannot continue. It’s time for the government to commit to reinvesting in Ontario’s universities.

Continued underfunding has left Ontario with the highest student-faculty ratio in the country, resulting in dramatically larger class sizes. In the last decade, Ontario university student enrolment has grown seven times faster than full-time faculty hiring. As a result, there are now 31 university students for every full-time faculty member, far surpassing the rest-of-Canada average of 22 to 1. The increasing student-faculty ratio has drastic implications for the overall quality of education and student experience at our universities.

I am fortunate that I hold a tenured position at a small university in a program with relatively small class sizes. I know first-hand how engaged students are with their education when I am able to appreciate them as individuals, respond to their different ways of learning, and bring my research into the classroom every day.

The disparity between student enrolment and faculty hiring has impacted education quality by generating larger classes with less one-on-one student-faculty engagement. Among other concerns, this leads to fewer opportunities for mentorship and academic or career advising. Renewed public investment in full-time faculty hiring is integral to closing the gap between the number of students studying and faculty working on our campuses.

Every student’s learning experience and every university’s capacity to produce research relies on the faculty members who teach, research, and engage in their communities. The stagnation in public university funding and faculty hiring is putting a strain on our higher education system. Larger class sizes mean that faculty are increasingly facing time and capacity constraints. With only so much time in the day, faculty research is under threat. As research capacity becomes strained, Ontario’s knowledge economy will lose out on the most innovative ideas and developments. These exciting possibilities will also be lost to the students in our classrooms, as professors are less able to contribute to forward-thinking curriculum development.

This stagnation in full-time faculty hiring has paralleled the estimated doubling of courses taught by contract faculty at Ontario universities since 2000. Research by the Council of Ontario Universities suggests that 58 per cent of faculty are now working on contract. This growing reliance on precariously employed contract faculty is another consequence of the underfunding of Ontario’s postsecondary institutions. It has grave repercussions for the individuals working in these positions and for our public educational institutions more broadly.
Contract faculty are highly qualified and experienced teachers and researchers. Unfortunately, they lack job security, face unpredictable scheduling, and often juggle jobs at multiple institutions. Their working conditions make it difficult to provide students with one-on-one engagement and continuity throughout their degree program. This can have a significant impact on student learning outcomes, with some students choosing not to take the next course in a sequence or, more worryingly, not completing their programs. Moreover, contract faculty receive a fraction of the pay of their full-time counterparts for doing the same work. I think this is simply unfair, and 87 per cent of Ontarians agree that contract faculty should receive the same pay for teaching the same courses as full-time faculty.

Precarious work is becoming the new norm in our institutions, and our universities are engaging in labour practices that run counter to the public’s strong desire that their universities be model employers. Instead of denying contract faculty fair pay, job security, or benefits, our publicly funded universities should embrace the values of equity and social justice so important in our communities and throughout postsecondary education.

Moving forward, both the provincial government and individual universities need to invest in a faculty renewal strategy that begins reversing these worrying trends—trends that raise class sizes, increase precarious work, and threaten education quality. This strategy should include measures that provide pathways for converting more contract faculty into full-time, tenured positions. Such an initiative is strongly supported by Ontarians, 85 per cent of whom believe that contract faculty should be offered full-time positions before more contract faculty are hired. This strategy would improve faculty working conditions and, in doing so, improve student learning conditions.

Levels of investment in faculty renewal should support enough full-time faculty hiring to deliver substantive improvements in province-wide student-faculty ratios. OCUFA estimates that an investment of $480 million over the next three years would support the creation of over 3,300 full-time tenure-stream positions, improve the student-faculty ratio by a modest margin, and bring Ontario substantially closer to matching the rest-of-Canada average.

This faculty renewal strategy must also help to ensure that retiring full-time tenured faculty members are replaced with new tenure-stream positions. Too often, when full-time faculty members retire, departments will turn to precariously employed contract faculty members to take over the teaching responsibilities, leaving the remaining full-time faculty members to pick up the slack on university service responsibilities. Again and again, we hear retiring professors express concern that the survival of their programs or departments will be jeopardized when they retire, and that the quality of their programs will decline without dedicated full-time faculty hired to replace them.

In sum, a robust faculty renewal strategy requires three pillars: hiring additional full-time faculty, replacing retiring full-time faculty, and supporting pathways for contract faculty into secure full-time positions.

With a provincial election on the horizon, supporting good academic jobs is a popular measure that candidates from all parties should be able to get behind. In fact, during the writing of this article, the Ontario NDP released their election platform in which they recognize the need to address precarious academic work and faculty renewal. Hopefully, the other political parties will take this opportunity to follow suit. Not only does the Ontario public overwhelmingly believe that universities should be model employers, but they understand that investing in better working conditions for faculty, including job security and benefits for contract professors, is an investment in education quality.
For too long, Ontario’s faculty have struggled to figure out how to do more with less. Our students deserve better. Bolstered by much needed funding from the provincial government, faculty renewal would represent a vital investment in our campuses, our communities, and our students. AM

Gyllian Phillips is a Professor at Nipissing University and the President of the Ontario Confederation of University Faculty Associations.

What might the 2018 Ontario Budget mean for university faculty?

From the Spring 2018 Issue

Given the upcoming election, it is widely understood that the 2018 Ontario Budget is as much a campaign platform document as a budget. While the continued implementation of reforms to student assistance are expected to further improve access for students, faculty are concerned that operating funding for universities remains stagnant, threatening the high-quality education students expect and deserve.

The lack of increased funding to support the government’s stated goals of providing fairness for contract faculty and encouraging faculty renewal is disappointing. Operating funding over the next three years is now on track to decline slightly (by 0.1%), which, when adjusted for inflation and enrolment, amounts to an even larger reduction in funding. Ontario’s universities already receive the lowest per-student funding in Canada and this budget will leave our province further behind.

The changes to student assistance announced in 2016 continue to be implemented, with parental and spousal contributions reduced for next year. This will result in more students qualifying for the grants and loans they need to afford the cost of tuition fees, which continue to increase. Investments in access are welcome, but they must be matched with operating investments in quality that support improved student-faculty ratios, smaller class sizes, full-time faculty hiring, and fairness for contract faculty. Investments in quality were missing from this year’s budget.

A one-time “support quality programs and student outcomes” fund, including $32 million for universities and $125 million for colleges, will be directed towards the implementation of the new labour laws passed in Bill 148, the Fair Workplaces, Better Jobs Act. In the university sector, we understand that it is expected to fund a portion of the cost of new minimum wage, vacation pay, and leave provisions, but no funding has been allocated to support the implementation of new equal pay provisions. It is also concerning that this funding has only been allocated for a single year, since supporting fair working conditions will require ongoing investment in Ontario’s universities.

Other ongoing initiatives noted in the budget include continued support for eCampusOntario’s Open Textbooks Library, continued investments in mental health services, newly announced capital funding scheduled to begin in 2020-21, new experiential learning and labour market focused programming, and continued progress towards the proclamation of a French-language university.

This budget also included the expansion of OHIP+ prescription drug coverage to seniors, which will result in cost savings for benefit plans that provide drug coverage for employees over 65 or retirees. Through negotiations, this could lead to benefit improvements or premium reductions for faculty associations.

Overall, this budget leaves important faculty concerns unaddressed. Following the June 7th election, OCUFA will continue working with the new government to advocate for re-investments in universities to support improvements in per-student funding levels; establish a more robust consultative process for Strategic Mandate Agreements; ensure core operating grants are not linked to performance metrics; establish funding to support fairness for contract faculty, including equal pay; and develop a faculty renewal strategy that supports full-time faculty hiring. AM

This article originally appeared in OCUFA Report.

Looking at the big picture: A breakdown of university funding in Ontario through the decades

From the Spring 2018 Issue

University revenues by student and government funding sources

Ontario student fees have accounted for a greater share of current university revenues (primarily operating and research) than provincial government grants in only two stretches over the course of nearly a century. The first was a five-year stretch beginning in 1945-46 when the federal Department of Veterans Affairs paid tuition fees on behalf of veterans. The second started five years ago in 2012-13.

A similar trend occurs when examining the percentage of combined revenue contributed by student fees compared to provincial and federal operating funding. In only two years have student fees outweighed the magnitude of funding from both the provincial and federal governments—in 1947-48 and again almost exactly seventy years later in 2016-17 in what could be the beginning of a new era.

About the data series

The data series is developed from two main sources:

  1. Data from 1920-21 to 1969-70 are derived from the Statistics Canada dataset on University education expenditures, by direct source of funds and type of expenditures, annual (dollars). The principal data on university revenues are based on the Financial Information of Universities and Colleges and predecessor surveys.
  2. Data from 1970-71 to present are based on the Financial Report of Ontario Universities produced by the Council of Ontario Financial Officers (COFO). COFO data are based on a sum of funds for operating, sponsored research, trust and special purpose, and net income/loss from ancillary enterprises. Capital funds and designated endowment income are excluded.

Federal funding in one form or another has long figured in the picture of university revenues, and has been included for three reasons. First, substantial university operating support was provided by Veterans Affairs supplementary grants and then through federal per capita grants until 1967-68, when federal funding was redirected to provincial governments to distribute. How much was directed to universities and colleges became purely notional once the federal government combined and reduced provincial transfers with the creation of the Canada Health and Social Transfer in 1996-97. Second, before 1960-61, Statistics Canada data on university revenues do not distinguish funds as being operating or assisted/sponsored research—research in addition to that already supported by operating funds. Finally, the proportions of federal and provincial government operating and sponsored research money for universities vary over time and between jurisdictions according to government policies and priorities. AM


Data sources
  • Statistics Canada, CANSIM table 478-0007 – University education expenditures, by direct source of funds and type of expenditures, annual (dollars) (accessed: December 16, 2013) for years 1945-46 to 1969-70. Data for intervening years 1946-47 to 1949-50 and 1951-52 to 1953-54 are drawn from Dominion Bureau of Statistics, Survey of higher education, 1946/48-1961/62, CS81-402-PDF. For 1946-47 and 1947-48, only the federal total was reported: geographic distribution was estimated by assuming it was the same as the average distribution for 1945-46 and 1948-49. Additional data drawn from Dominion Bureau of Statistics, Higher education in Canada, 1936/38-1944/46, CS81-402E-PDF for years 1920-21 to 1944-45.
  • Council of Ontario Finance Officers, Financial Report of Ontario Universities, for years 1970-71 to 2016-17 (Committee of Finance Officers, Revenue and Expenses of Ontario Universities before 1982-83).

The public value of public funding for research

Basic, curiosity-driven research continues to take a backseat to privately and publicly funded research linked to short-term commercial profit. We must push back against this trend and reinvest in the core mission of the university.

What machine has changed social life more than the networked computer? If we could go back in history to the point where computing technology was just emerging, armed with the knowledge of how essential computers have proven to be, who would not have invested in their development?

Now change the picture somewhat. From the current user’s perspective, the computer is part television, part calculator, part typewriter, part phone, and part stereo. We think of it as a multipurpose physical machine, attractively designed and marketed as an essential lifestyle and business device. However, the real heart of the machine beats beneath the design, the high-resolution display, the wiring, and the microprocessors—at the heart of every iteration of the computer has been the machine language that enables the computer to receive instructions from its user and process information.

Now, imagine that you are an investor looking at a messy chalkboard scrawled with the mathematical logic proofs that would one day become the foundation of computer machine language. This mathematical logic would have been, and still would be, as indecipherable to most people as hieroglyphics were to Europeans before the discovery of the Rosetta Stone. Most likely, you would not have understood that you were looking at the future of communications and information processing, not to mention science and entertainment. You would have kept your money in your pocket and gone off to invest in the soybean futures market instead. Nothing against soybeans, but that would have been an unwise investment decision in the long run.

Short-term profit vs long-term knowledge generation

Private investors tend to think only in terms of the short run: the immediate pay-off that will come in the next business quarter or year. Human knowledge, by contrast, develops only over the long term. The step to the next plateau, the breakthrough, is often not visible; it sometimes depends on happy accidents, and it almost always involves formal or informal cooperation and collaboration. Further, it often is not coextensive with the immediately useful or saleable. These conditions are opposed to the typical requirements upon which private investors insist: predictable, low-risk returns; exclusive control over the product; secure intellectual property rights; and immediate marketability.

Private investors want some assurance that what they are investing in will pay off. Universities are institutions whose entire purpose is the production and dissemination of knowledge in the widest and deepest sense of the term, and they cannot fulfill these missions if they are forced to rely upon private funds. Private investors seek to grow their capital by funding the development of products and techniques that can be commodified and eventually sold. Whether one accepts the focus on the growth of money-value as ultimately important or not, it is incompatible with the growth of knowledge.

Even if we disregard every other social and human interest and think only about the importance of innovation to the growth of GDP (business leaders and politicians chant this mantra non-stop), these innovations will dry up unless basic scientific research is funded. Basic research, as the example of mathematical logic illustrates, often has no short-term use-value, can be inscrutable to non-experts, and has uncertain practical implications, and thus can appear to be a waste of money from a perspective concerned with short-term gains. If researchers had to rely on nothing but private funds geared to short-term returns, basic research would soon become impossible, and the impact on the economy and GDP would be devastating.

Whether one accepts the focus on the growth of money-value as ultimately important or not, it is incompatible with the growth of knowledge.

Venture capital pools (groups of investors that target emerging companies) understand, to some extent, the problem of short-termism and try to correct for it. But even if they succeed in overcoming that problem, two more decisive incompatibilities with private funding appear.

The market only cares about certain types of knowledge

The first incompatibility is that the set of issues worth investigating is not the same as the set of products that can be sold. Universities are not appendages of commodity markets, but institutions that create and protect spaces where free inquiry and creation are possible, spaces unconstrained by factors extraneous to the problem being investigated. Without public funding, it would be impossible to pursue research in any field that could not prove itself useful in the crucible of business competition. This would rule out most social scientific and humanistic research, most artistic creation, and much of the basic but important research done in the sciences. The market does not care whether string theory turns out to be true or what the mass of the Higgs boson is. That, however, does not mean that those problems are not worth solving or will not prove vital to future innovation.

The second incompatibility is that the set of problems worth solving is not coextensive with the set of commodities the solution might potentially generate. A problem is important to solve when its solution will improve a crucial dimension of human life. Human beings are not simply bellies and wallets; we are questioning minds who want to know who we are, where we are, why we are here, whether it matters, and what we ought to do with our lives. These questions are philosophical, but their solution is not the province of philosophy alone. Natural and social sciences help explain where we are (what the origin and structure of the natural world is, what the history and structures of various social worlds are, and how we can keep ourselves healthy). Religion, art, and the humanities, broadly construed, offer answers to the question of why we are here and whether it matters. None of these answers is worth any money, but no sort of human life is imaginable without exploring these questions. No human culture over the past several thousand years has failed to pose them in one form or another. The belief that there is some sort of algorithmic solution that can provide final and definite answers to these questions will prove a peculiar but transient delusion of our capitalist technocratic culture.

Thus, work will have to go on in all the fields engaged by these problems. The history of the disciplines of human inquiry is formed from the criticism of attempts to solve these problems. Through the criticism of given answers, the natural and social sciences overcome constricted paradigms, humanistic research becomes conscious of exclusions and false constructions of others, and the arts overcome derivative and moribund forms to find fresh ways to say what needs saying. Since public funding is not, in principle, answerable to any sectional or private interest, it is the best means to ensure that this history of critical inquiry continues.

Since public funding is not, in principle, answerable to any sectional or private interest, it is the best means to ensure that this history of critical inquiry continues.

Public funding for the public’s benefit

Unfortunately, public funding is often held hostage by sectional and private interests, so it is important to distinguish between the principle that underlies public funding and its source, the government. When public funding is tied to the short-term interests of governments seeking re-election, bogus metrics unconnected to the real mission of universities, military prerogatives, or short-term business priorities, it becomes entangled in the same problems as private funding.

The investments announced in the most recent federal budget are uneven in this regard. On the one hand, the increases in base funding to the Tri-Councils are good news. The budget announced an investment of $925 million over five years for NSERC, SSHRC, and CIHR. It also targeted investment in Indigenous scholars, early-career researchers, and interdisciplinary research. On the other hand, this funding is only just over half the amount recommended by Investing in Canada’s Future: Strengthening the Foundations of Canadian Research, a report commissioned by the federal government and delivered by former University of Toronto President David Naylor. The sort of “high-risk” interdisciplinary research the government wants to fund seems to privilege work that promises spectacular monetary rewards. Still, whatever criticisms might be made of the details, the federal budget certainly recognized the essential role that public funding plays and will hopefully become a model that provincial governments follow.

Researchers, whatever their discipline, must insist that the values served by universities are public, as are their benefits. What are those values? First, the good of knowledge in and of itself as the (open-ended) satisfaction of the human need to understand ourselves and our world. Second, the good of criticism as (the open-ended) satisfaction of the need of suppressed voices, marginalized perspectives, and dissenting theories to be heard. Third, the good of creation and invention as the (open-ended) satisfaction of the human need to exercise our intellectual and practical capacities as the real substance of meaningful lives. Finally, the good of social and scientific change towards more comprehensive and coherent understanding and more enabling inclusiveness of social institutions, practices, and relations as the (open-ended) satisfaction of the human need for freedom.

High-quality research depends on robust and stable public funding

What does that type of public funding mean in practical reality? It means stable funding adequate to the intellectual purposes of the university. In turn, stable funding adequate to those purposes means, in the first instance, investment in full-time, tenure-track faculty positions (both new positions and converted positions for research-active, long-serving contract academic staff). In the second instance, it means ensuring that universities can maintain the physical infrastructure research requires, including laboratories and real libraries. In the third instance, it means both adequate funding for granting agencies and regulations that ensure the pre-eminence of peer review and set aside extrinsic considerations about short-term usefulness or “knowledge mobilization” in the distribution of grants. There are, of course, well-known problems with peer review: methodological gate-keeping and the refusal to acknowledge marginalized voices and heterodox approaches. However, none of these problems will be solved by holding everyone hostage to private sector priorities or by imagining that there is some magical algorithm that can determine funding priorities.

In order to ensure that these values are served, universities have to continue to press governments for adequate funds (not only targeted research funding but basic funding adequate to fulfill their mission). If universities cease to be truly public institutions because they depend on tuition fees for most of their revenue, then they can hardly be a strong voice for public research funding. Moreover, if universities continue to marginalize faculty voices on Boards of Governors, compromise collegial governance, and accept, without critique and resistance, the irrationality of the metrics-as-measure-of-excellence fad, they will fail in their mission to function as institutions of publicly-valuable research and higher education.

None of these goals can be achieved without professors, in alliance with students and other campus groups, working together to ensure that the historic mission of the university is respected and guides all institutional decisions, identify and overcome roadblocks to the coherent fulfillment of this mission, and push for the institutional changes necessary to keep universities connected with the changing social dynamics and problems of the public they serve. AM

Jeff Noonan is a Professor of Philosophy at the University of Windsor and President of the Windsor University Faculty Association.

What happened to the issue of postsecondary education?

Postsecondary education is an issue that affects a majority of Ontarians, but it does not often feature prominently in provincial elections. How might this issue be pushed onto the election agenda?

Education is an issue that affects a majority of Ontarians. Whether you are a parent with school-aged kids, a student currently enrolled in a school in Ontario, or an adult taking courses or contemplating taking courses to upgrade your skills, this is an issue that touches Ontarians’ lives. Despite this reality, issues related to education in general, and postsecondary education in particular, rarely dominate the discourse during provincial election campaigns. From time to time, ancillary issues like faith-based school funding make an appearance in Ontario elections—to detrimental effect. We have to go back to the double-cohort issue more than 15 years ago to find an election when education occupied a dominant place in the public debate. With another provincial election looming, is there any indication that postsecondary education can be pushed onto the election agenda?

To answer this question, we conducted a public opinion study with 2,001 Ontarians over the age of 15. Data were collected between January 22nd and February 4th, 2018. One guiding assumption of our analysis is that there is a link between electoral outcomes and public policy. Another is that issues matter in influencing voters’ choices at election time. Understanding the link between vote choice and issues is too often limited to top-of-mind issues; however, less salient issues can also influence how voters make up their minds. Postsecondary education is one such issue.

The most important issues in Ontario

As is generally the case, issues related to education are not particularly salient in Ontario at present. When asked about the most important issue facing the province, 11 per cent of Ontarians mentioned health care, slightly ahead of jobs (10 per cent), balancing the budget (10 per cent) and corruption (8 per cent). Only 2 per cent mentioned education (Table 1).

However, when queried about concerns on an issue-by-issue basis, a slightly different picture emerges. Using a 0 to 10 scale where 0 is “not at all concerned” and 10 is “very concerned”, concern over the cost of university tuition fees in Ontario (6.5) is on par with issues such as the level of unemployment in Ontario (6.5) and the adequacy of funding for Ontario’s public services (6.64) and just behind the quality of employment in Ontario (7.1). More importantly, 75 per cent of Ontarians believe that the quality of university education should be a high priority for the provincial government in Ontario (Table 2).

The challenge facing those who want to push postsecondary education to the forefront of the election debate is to find ways to link postsecondary education to other issues so that politicians are more likely to pay attention. The study we conducted offers a few suggestions on how to achieve this objective.

75 per cent of Ontarians believe that the quality of university education should be a high priority for the provincial government in Ontario

Ontario’s universities make important contributions

There is high recognition and support for the importance of universities in our society. Specifically, a strong majority of Ontarians agree that: Ontario universities make an important contribution to the local economy (80 per cent); the scientific research undertaken by universities makes an important contribution to the provincial economy (77 per cent); universities provide students with a high-quality education (76 per cent); and, having a university degree is both more important now than it was to our parents’ generation (71 per cent) and a necessary asset in today’s world (69 per cent).

These findings point to a very receptive public opinion environment within which to engage in a discussion about the importance of postsecondary education. While there may not be immediate concern about postsecondary education as an election issue, the electorate can be primed to focus on it if linkages are made between the high value Ontarians place on postsecondary education and its potential impact on issues like jobs and the economy. This is reinforced by the fact that Ontarians clearly see room for improvement on this front: only 12 per cent think the quality of university education has improved over the last five years.

Ontarians trust university professors on issues of quality

Two more important dimensions need to be considered when looking at ways to increase the prominence of this issue during the upcoming election campaign. The first centres on finding trustworthy spokespeople. On this front, university professors (trusted by 75 per cent of Ontarians), student organizations (65 per cent), and university administrators (58 per cent) have a clear advantage over the Ontario government (41 per cent) and private sector companies (37 per cent). Findings are given in Table 3.

The second dimension to consider constitutes an obstacle that may be difficult to overcome. Despite the perceived importance of postsecondary education and the contribution it makes to our economy and society, none of the provincial parties are seen as being particularly trustworthy on this issue. When asked “which of the provincial political parties would do the best job at ensuring high-quality education at Ontario’s universities?”, “none of the above” is mentioned by a plurality of Ontarians (32 per cent), ahead of the Progressive Conservatives (24 per cent), the Liberals (23 per cent) and the NDP (21 per cent). Accordingly, the three main parties will likely perceive this issue as more of a minefield than as an issue that can be leveraged for electoral benefit. Only by linking postsecondary education to the other important election issues suggested above can we create a dynamic where political parties find it impossible to ignore such an important public policy domain. AM

André Turcotte is an Associate Professor in the School of Journalism and Communication at Carleton University. Heather Scott-Marshall is the President of Mission Research and an Adjunct Professor in the Dalla Lana School of Public Health at the University of Toronto.

Building solidarity on Ontario’s university campuses

University administrations often seek to advance unpopular agendas by attempting to pit students and faculty against each other. Through campus alliances, we can develop stronger relationships that bolster our ability to advance our own priorities.


The CUPE 3902 strike in March of 2015 at the University of Toronto was my first exposure to some of the underlying issues in postsecondary education. At the time, I was a volunteer at my students’ union and recently elected as the Vice-President University Affairs and Academics. It was through my involvement in my students’ union that I began learning the realities faced by faculty, contract workers, and teaching assistants (TAs) while unlearning the myths so often propagated to pit students against academic workers.

Many thoughts raced through my mind as I joined concerned students camping outside the principal’s office at the University of Toronto Mississauga to show our solidarity with the striking workers. The principal sat on the floor with us and claimed that the situation was out of his hands. The students gathered were unimpressed with his attempts at shifting the blame, but it was easier for him to do that than to actually deal with our concerns.

The four-week strike was an uncomfortable time to be a student on campus, as we maneuvered through picket lines to attend classes and continue our academic lives. Reflecting on it now, I never should have crossed the picket lines, but at the time I did not understand what such an action represented (a common struggle for undergraduate students). Tensions ran high as many students turned against their TAs, claiming that they were selfish, inconsiderate of student realities, and obligated to teach because students paid their salaries through tuition fees. It was hard to flip the narrative around to focus on the real issues at hand. Hearing some of these sentiments expressed during both last fall’s strike by college faculty and this spring’s strikes at York and Carleton makes me wince as I recall just how much similar misinformation was being circulated during those first years of my undergrad.

Looking back on the strike, I feel embarrassed that many of my classmates and colleagues who now work in precarious jobs expressed such vehement opposition to the actions taken by their TAs. At the time, we did not realize that their fight was our fight too.


Government cuts to postsecondary education funding have driven institutions to rely on precarious, low-wage work and skyrocketing tuition fees to balance their budgets. On campus, this results in contract faculty balancing multiple jobs at different institutions to make ends meet and living with the uncertainty of whether or not they will have a job the following semester. Precarious, low-wage work means teaching assistants do not get paid for the additional time spent marking assignments or preparing content for students. It means that, despite students paying exorbitant tuition fees, the high-quality education promised by our institutions is being steadily eroded.

Students are no strangers to this reality. Many students juggle multiple part-time jobs, take out mortgage-sized loans only to pay them back with interest, and join a workforce where precarious work is the norm. It’s a bleak future after investing so much time and money in a postsecondary education.

The anti-worker narrative that loomed over campuses across the province in 2014 and 2015 began to shift as more students were subjected to the same working conditions as their professors, contract faculty, and staff.

I was at a loss for words when I stood outside Queen’s Park with the thousands of striking college contract faculty unionized with the Ontario Public Service Employees Union (OPSEU). Over those five weeks in October 2017, the number of students I saw on the picket lines, at actions, and on Facebook defending workers’ right to strike was heart-warming. Students took on the responsibility of explaining how, despite their absence at the bargaining table, workers were still fighting for students’ rights to a high-quality education and fair working conditions. It was then that I realized just how much had changed in the previous three years, and I attribute this shift to the ground-shaking work of cross-campus solidarity groups. These grassroots organizing groups acknowledge and organize around the fact that students’ learning conditions depend on the working conditions of those teaching and working to keep postsecondary institutions running. This symbiotic relationship means that harm done to one will inherently affect the other, and vice versa: a victory for one is a victory for all.

These victories can only be achieved through unity and collective organizing, so the support of students and community members during campus strikes and labour disputes is integral. Students, staff, and faculty at York University have mastered this method of organizing and have set the standard for other groups across the province.

These victories can only be achieved through unity and collective organizing

The York Cross-Campus Alliance

York University has always been a hub for progressive organizing and often takes stances on issues that are considered trail-blazing in the sector. This has been achieved, in part, because community members are active in the university’s cross-campus alliance. The alliance consists of faculty; undergraduate and graduate students; labour unions that represent teaching and research assistants; and labour unions that represent support staff like food workers, janitorial workers, and groundskeepers. The cross-campus alliance tackles various issues affecting the community, including supporting collective agreement negotiations and working on initiatives to unionize other workers. It is the success of this cross-campus alliance that has resulted in continuous worker support during strikes, majority student support at York for striking TAs, and consistent pressure on the York administration. This is an environment that was missing for us during the University of Toronto strike in 2015.

Moving forward together

As stakeholders in a system that is often threatened by political shifts, students and faculty face an uphill battle in this provincial election—a battle that will continue long after the election is decided. Over the past few years, we have seen great improvements in labour laws and access to postsecondary education. However, we must continue the fight to protect what we have gained and to win what we have not yet achieved. We are all in desperate need of a government that prioritizes funding for public education, but more than ever we need a shift in public discourse around the value of empowering people through higher learning.

It is crucial for students and faculty associations to develop strong relationships that are rooted in our commonalities and that push us to show up publicly for each other. There are many problems plaguing our education system. This may seem intimidating, but it is an opportunity to collectively organize on a variety of issues, ranging from workers’ rights to education quality to fighting discrimination to mental health resources on campus for students, faculty, and staff—issues that will inspire our friends and colleagues and build an even stronger sense of solidarity on campus.

Leaders in the labour and student movements have always been at the forefront of change; we must remember that together we are stronger and that, united, we will never be defeated. AM

Nour Alideeb is the Chairperson of the Canadian Federation of Students-Ontario.

2018 Worldviews Lecture: The challenges of free speech on campus

From the Spring 2018 Issue

The Worldviews Lecture is a lively forum to advance mutual understanding of the relationships, challenges, and potential of the academy and media.

At this year’s Worldviews Lecture, Professor Sigal Ben-Porath addressed the increasingly heightened debate around free speech on campus. Her lecture was followed by a panel discussion that explored challenges for democratic values and minority rights in academia and beyond.

In her lecture, Professor Ben-Porath reflected on campus free speech controversies of recent years—from cancelled speakers to physical fights—and suggested that campuses need to reaffirm their commitment to both free speech and inclusion, with the understanding that both are tightly linked to the academic mission.

Referring to ideas she wrote about in her book, Free Speech on Campus, Professor Ben-Porath presented three levels of the debate for discussion:

Substance: What can be talked about, and are there things that should not be said? Must universities stay neutral regarding campus speakers?

Impact: Are certain views too hurtful to voice? Must they be silenced to avoid negative psychological or social consequences? Are universities considering the impacts of their decisions around these issues?

Public perception: Campus speech debates are often inaccurately portrayed and ineffectively addressed in the media. Open inquiry and the discussion of controversial ideas are an integral part of the academic mission, even if institutional practices could be improved. How can postsecondary institutions ensure the public’s understanding of their work reflects their academic mission?

Professor Ben-Porath argued that universities must become places that protect inclusive freedom, where ideas can be challenged, but where all feel safe to make their opinions heard. She distinguished between intellectual safety and dignitary safety, stating that, while university campuses are places where students should be challenged intellectually, challenging the abilities, rights, or legitimacy of a group of people (particularly those whose voices are already marginalized) actually suppresses speech. Further, she noted that the attempt to weaponize the issue of free speech actually chills speech itself.

Pointing out that debates around speech are not unique to our era, Ben-Porath argued that universities must maintain public standing as institutions that serve the broader community and public interest, not just a small group of loud voices.

She concluded by stating that inclusive campus speech requires understanding:

  • the existing norms for disseminating knowledge on campus (the voices and speech currently accepted);
  • who is responsible for including people and speech topics;
  • the resources available to community members if their expressive or dignitary needs are not being met; and
  • how to ensure university campuses are spaces where a productive dialogue can be sustained.

Professor Ben-Porath’s lecture was followed by a panel discussion, moderated by Globe and Mail higher education reporter Simona Chiose, that featured questions from members of the audience and those watching online.

Jasmin Zine, Professor of Sociology and the Muslim Studies Option at Wilfrid Laurier University, observed that allowing white supremacists space on campus to speak legitimizes their views, regardless of attempts by universities to distance themselves from the debate and claim neutrality. She argued that universities must take responsibility for the consequences of that legitimization. Professor Zine spoke of the need to distinguish between controversial speech and hate speech, and to balance speech rights with human rights. She also spoke to the emotional and intellectual labour required to counter intolerant racist and sexist speech—labour that often has to be undertaken by those already struggling to have their voices heard.

Paul Axelrod, author, retired York University professor, and former Dean of York’s Faculty of Education, agreed with Professor Zine that Canadian hate speech laws should be applied on campus, but argued that if there are any doubts about the type of speech in question, we should err on the side of allowing it. He discussed the new dimensions of the debate, which has seen increased harassment online. He believes that the values and practices of free expression and inclusivity can and should be reconciled, and that the policies we adopt should reflect these commitments.

Shree Paradkar, a Toronto Star journalist who writes about discrimination and identity issues, pointed out that many of these controversial speakers already have well-established platforms, and that denying them the right to speak on campus has very little impact on their ability to make their voices heard. She argued that speech rights are far too important to be used to protect bigotry and that human rights should not be up for debate. Paradkar illustrated how free speech advocates do not come to the defence of all speech, revealing that this debate is not necessarily about speech, but about ensuring only certain groups have the right to speak.

Scott Jaschik, CEO and Editor of Inside Higher Ed, started by pointing out that speech laws in the United States (where there is less protection against hate speech) are very different from those in Canada, so comparing developments in the two countries can be problematic. He argued that we need to reject the idea that free speech is disappearing from campuses and that, in fact, it continues to thrive. He described how, in the US, the free speech issue is often tied to money, and controversial speakers are paid to tour college campuses to promote their views.

Without the financial incentive, Jaschik argued these individuals would be far less likely to travel around disseminating their views. He also agreed with Paradkar that free speech defenders seem to be very selective about who they choose to defend. Jaschik concluded by stating that he thinks blocking speech is counterproductive, and actually boosts the notoriety of the speaker, affirming and enabling those who want to sensationalize these issues. AM

The full lecture and panel discussion were recorded and can be watched online by visiting the Worldviews website at worldviewsconference.com.

Humour Matters: It’s time to make meaningless words great again

When it comes to humour about public funding, there really is no way to compete with reality. The last time the basic funding model for Ontario universities was changed, the Maple Leafs were winning Stanley Cups. Read that sentence again—the Maple Leafs were winning Stanley Cups. Unless you are a scholar of ancient history, you’ll probably have to Google the date.

But only a fool would advocate a wholesale budgetary revision, since the direction of change always seems to be down. Even with contract workers teaching 237 per cent of courses, funds seem perpetually short. Tuition has gone up and up, while students spend more time working for wages than ever before, which surely explains the small crowds for my lectures on Canadian wheat and nineteenth century railways.

In general, it’s hard to be optimistic about the future. Although, with so many picket lines ready to be deployed, at least Woody Guthrie songs will be back in fashion.

That said, the prospect of hours doing experiential learning on cold picket lines got me thinking about new revenue streams. Like, if we could put a tax on buzzwords, half our problems would be solved. All I hear from universities nowadays is transform, commercialize, incentivize, innovate, mobilize knowledge, and cultivate Excellence (that’s Excellence with a capital “E” thank you very much), words that sound emptier than the methodology section of my last grant application.

I mean, the family swear jar has exercised a powerful hold on my children. They say a bad word, they throw their allowance in the jar, incentivizing their mouths to avoid the interesting vocabulary they pick up during sleepovers at Granny’s. Perhaps the academic equivalent—let’s call it the iJAR—could be strategically placed on the podium at ministerial press conferences. Not only would this incentivize the mobilization of less meaningless changicity, but university revenues would skyrocket.

Or, since conservatives like user fees so much, we could charge them two dollars for every use of the term “political correctness.” Talk about meaningless words: undergrads won’t even stay to the end of my lectures, but apparently I have some totalitarian ability to fill their minds with neo-Marxist postmodernism.

That doesn’t even consider the amount of carbon dioxide being expelled during my cranky old-man lectures, no doubt accelerating climate change even as its existence is being discussed. Could we figure out a mechanism to regulate and monetize my hot air? Just for convenience, we’ll call it Grouch and Trade.

There must be a million missed chances for new revenues, from co-branded experiential exams to transformative commercialization of empty classes through Airbnb. The list just goes on and on. But, if you want to hear those empty words, you’ll have to drop a toonie in the iJAR.

I suppose that if we really want to mobilize knowledge and cultivate generalized Excellence, we could fund universities by having citizens contribute a percentage of their income to the government, based on a sliding scale, with the funds distributed to public goods based on widely shared objectives. Nah, forget it, that utopian scheme will never work. Only a Leafs fan could dream so big. AM

Steve Penfold is an Associate Professor in the Department of History at the University of Toronto.

Unintended consequences: The use of metrics in higher education

Metrics are used throughout Ontario’s postsecondary education system—for determining university funding, judging institutional performance, and gauging student perceptions. But metrics are not always the best tool for evaluation, and often have unintended consequences.

Measured scepticism

Statistical measures, or “metrics” as we are now expected to call them, have become as pervasive in higher education as they are deplored. The growth in their use has been neither recent nor restricted to Ontario, so faculty are unlikely to be able to reverse metrics’ rise. But faculty could displace metrics from the core activities of teaching and learning by promoting peer review of teaching, which is a far more valid indicator of teaching quality, would support teaching and learning as a community endeavour, and would remain very much the responsibility of individual faculty, rather than the domain of central data collectors and analysts.

Ambivalence about metrics

In an article published in 2000, English academic Malcolm Tight amusingly but informatively compared the ranks of English soccer clubs and universities. His work confirmed that there was a close relation between the distribution of universities and soccer clubs and the population of English cities and larger towns. Tight also found that, in many cities and towns, local universities shared similar ranks to local soccer clubs (if a university was ranked in the top ten, so was the soccer club). However, universities in the South of England were more likely to rank much higher than local soccer clubs, while universities in the North and Midlands were more likely to rank much lower.

Both soccer clubs and universities gain a considerable advantage from being old and well-established, and gain a further advantage when they have a higher income than their competitors (whether through endowments, tuition fees, ticket prices, or merchandise), something which is also strongly related to how long the club or university has been operating. University ranks are also similar to English soccer team ranks in that they are dominated by a stable elite that changes little over time.

Tight’s comparison of ranks illustrates an ambivalence about the usefulness of ranks and, more generally, of metrics, statistical measures, and performance indicators. On the one hand, these ranks seem to democratize judgments and decision-making about specialized activities. Those who know little about English soccer can readily determine the most successful clubs by scanning the league ranks. On the other hand, some highly ranked clubs may play too defensively and thus may not be considered by aficionados to play the “best” soccer. Ranking soccer clubs only by their winning ratio ignores more sophisticated judgements about the quality of the football they play.

Government funding and metrics

The Ontario Ministry of Advanced Education and Skills Development (MAESD) and its predecessors have long allocated funds to colleges and universities predominantly according to their level of enrolment. However, over the last decade MAESD has relied increasingly on performance indicators to monitor postsecondary institutions and influence their internal decisions. MAESD has been reporting each college’s rates for student satisfaction, graduation, graduate satisfaction, graduate employment, and employer satisfaction. For each university, the Council of Ontario Universities reports data on applications, student financial assistance, enrolments, funding, faculty, degrees awarded, and graduates’ employment outcomes.

Ontario universities draw their operating revenue from tuition fees (38%), MAESD (27%), the federal government (11%), other Ontario ministries (4%), and other sources (20%).1 Only four per cent of MAESD’s operating funding is allocated according to performance indicators, meaning that just over one per cent of Ontario university revenue is allocated in this way.2 Yet performance funding and its indicators have been debated extensively.
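The “just over one per cent” figure follows directly from multiplying the two shares above. A back-of-envelope sketch, assuming the HEQCO percentages apply to total operating revenue:

```python
# Back-of-envelope check: what share of total university operating revenue
# is allocated by performance indicators?
maesd_share = 0.27        # MAESD's share of university operating revenue
performance_share = 0.04  # share of MAESD funding tied to performance indicators

share_of_total_revenue = maesd_share * performance_share
print(f"{share_of_total_revenue:.2%}")  # prints 1.08%
```

Four per cent of a 27 per cent share works out to about 1.08 per cent of total operating revenue, which matches the “just over one per cent” claim in the text.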

Even more contentious is MAESD’s differentiation policy, which is informed by the Higher Education Quality Council of Ontario’s (HEQCO’s) analysis of metrics. The policy is primarily implemented through metrics-heavy strategic mandate agreements negotiated between the province and each university. Further, in a recent article for Academic Matters, the executive lead of Ontario’s University Funding Model Review, Sue Herbert, expressed a need for more “information, data, and metrics that are transparent, accessible, and validated.”3

It is therefore easy to conclude that MAESD’s direction for colleges and universities is driven by metrics that allow government officials and ministers to make judgements about institutions without a detailed familiarity with, or expertise in, postsecondary education. This is similar to arrangements in other Canadian provinces, a number of US states, the United Kingdom, and other countries, where governments and ministries have greatly increased their reliance on metrics.

There are three obvious alternatives to this scenario. The overwhelming preference of college and university management and staff is for governments to leave more decisions to the institutions alone. Funding would be provided to universities with few strings attached, tuition fees would be unregulated, and universities would be able to pursue their own visions for education, free of government interference. However, such a scenario undermines the democratic power of Ontario citizens, which is exercised through the provincial government and its delegates.

The second alternative would be for ministers and ministries to return to making decisions about postsecondary education by relying on their own judgement, attitudes, impressions, and others’ anecdotes, as well as the advice of experts. This is opaque and relies on a high level of trust that decisions aren’t affected by partisan interests or personal prejudices.

A third alternative would be for the government to delegate decisions to an intermediate or buffer body of experts in postsecondary education who would make decisions according to a combination of their own judgements, expertise, experience, and metrics. This was investigated by David Trick for HEQCO, who concluded that:

An intermediary body could be helpful as the Ontario government seeks to pursue quality and sustainability through its differentiation policy framework. Specifically, such a body could be useful for pursuing and eventually renewing the province’s Strategic Mandate Agreements; for strategic allocation of funding (particularly research funds); making fair and evidence-based decisions on controversial allocation issues; and identifying/incentivizing opportunities for cooperation between institutions to maintain access and quality while reducing unnecessary duplication.4

However, governments and ministries are concerned that buffer bodies restrict their discretion and reflect the interests of the institutions they oversee more than the governments and public they are established to serve. In fact, the UK recently dismantled its higher-education buffer body, the Higher Education Funding Council for England.

Institutional actors

Metrics are also tools for transferring evaluation and monitoring from experts, who are usually the people conducting the activity, to people and bodies who are distant in location and seniority, often senior management located centrally. No organization in Ontario or Canada has replicated the detail of the University of Texas’ task force on productivity and excellence, which compiled data on each professor’s pay, teaching load, enrolments, mean grade awarded, mean student evaluation score, amount of grants won, and time spent on teaching and research. Data on 13,000 faculty in nine institutions were published in an 821-page spreadsheet in response to open-records requests.

Metrics are tools for transferring evaluation and monitoring from experts to people who are distant in location and seniority.

HEQCO’s preliminary report on the productivity of the Ontario public postsecondary education system compared data for Ontario’s college and university sector with those for all other provinces, examining enrolments, faculty/student ratios, funding per student, graduates, graduates per faculty, funding per graduate, tri-council funding per faculty, citations per faculty, and faculty workload. OCUFA criticized that report for being preoccupied with outputs at the expense of inputs such as public funding and processes such as student engagement, as well as for its narrow focus on labour market outcomes, which excluded postsecondary education’s broader roles of educating and engaging with students and the community.

In a subsequent report for HEQCO, Jonker and Hicks went further, analyzing data on individual faculty that were publicly posted on university websites and elsewhere. HEQCO wrote that the report:

conservatively estimates that approximately 19% of tenure and tenure-track economics and chemistry faculty members at 10 Ontario universities sampled demonstrated no obvious recent contribution of scholarly or research output, although universities generally adhere to a faculty workload distribution of 40% teaching, 40% research and 20% service.

Extrapolating from that sample, the authors say that Ontario’s university system would be more productive and efficient if research non-active faculty members compensated for their lack of scholarly output by increasing their teaching load to double that of their research-active colleagues—for an 80% teaching and 20% service workload distribution.5

This report illuminates several issues with using metrics to measure productivity. Neither of the authors is a chemist, yet they felt competent, based on their use of metrics, to judge chemists’ scholarly “output” and workload. Neither author works at a university with chemists, yet they believed it was appropriate for them to propose major reallocations of university chemists’ workloads. These problems led to extensive criticisms of the report’s method and conclusions.

The report also made economics and chemistry faculties’ work more visible for public scrutiny and, possibly, more accessible for public regulation. This led to the report being praised for promoting the extension of democratic authority over public bodies. Under this argument, the report’s partial and incomplete data and crude, reductive methods were not grounds for abandoning the project but for strengthening its data and method.

A similar trend has been occurring within Ontario colleges and universities over the last two decades. Central administrations in Ontario’s postsecondary institutions have long collected data to allocate funds internally and have increasingly collected and analyzed data to assess and monitor their institution’s performance. Ontario universities now analyze extensive metrics to evaluate their institutional plans and performance. By a process of mimetic isomorphism—the tendency of an organization to imitate another organization’s structure—institutions tend to allocate funds and evaluate performance internally according to the criteria on which their own funds are received and their performance evaluated. These measures are replicated, to varying extents, by faculties. While immediate supervisors and heads of departments still seem to share enough expertise and interests with faculty to trust in their own judgment and that of their faculty members, they still need to take account of the metrics used by senior administrators in their institution.

Ontario universities now analyze extensive metrics to evaluate their institutional plans and performance.

Unintended consequences

A common criticism of the use of metrics is that they can have unintended and undesirable consequences by distorting the behaviour of those being measured. This idea was expressed rigorously by British economist Charles Goodhart, who wrote that an observed statistical regularity tends to collapse once it is used as a target. There are various formulations of this idea, which has come to be known as Goodhart’s law. Similarly, Donald Campbell writes that, “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”6

In his paper on Goodhart’s law and performance indicators in higher education, Lewis Elton argued that performance indicators are a tool of public accountability that direct attention away from important processes, undermine academic professionalism, and reflect an erosion of trust in individual academics. However, he was not uncritically protective of academics, arguing that most traditional assessments of students use proxies similar to performance indicators (PIs) and that most grading is unreliable, suffering the methodological flaws of accountability through metrics:

Much of this traditional assessment is largely through the equivalent of PIs, with all the faults that stem from Goodhart’s Law…

Also, it may be noted that what are normally called summative and formative assessment correspond to PIs used as a control judgmentally and PIs used as a management tool for improvement.

As long as academics use traditional examinations to assess students, they really have no right to complain if the Department of Education and Skills assesses them through quantitative PIs and targets (emphasis in original).7

Implications for faculty

Much of the data upon which metrics are based are collected from faculty, adding unproductive work to their many other duties. The resources invested in collecting, reporting, and analyzing metrics are diverted from academic activities. Metrics are a tool for shifting power from those who do work to those who monitor that work. They also shift power from experts to those who can interpret descriptive statistics. For both reasons, metrics are also a tool for shifting power from those who are lower down in an organization to those who are higher up. Metrics may change faculty priorities and increase the pressure to improve their performance on the measures monitored, as Jeanette Taylor found for some of the 152 academics she surveyed at four Australian universities. Metrics are likely to reduce faculty’s discretion over the work they do and how it is evaluated. Metrics are also likely to intensify faculty work.

Metrics are limited and many have methodological flaws. Yet pointing out these problems rarely pauses the use of metrics; instead, it leads to increased investment in making them more extensive and rigorous, which in turn increases demands on faculty to provide more and better data. Metrics are widespread in postsecondary education in many jurisdictions beyond Ontario, and are pervasive in elementary school education. This suggests that faculty can do little more than moderate and perhaps redirect the metrics that flood over the sector. However, there is a major action that faculty can and should take that would redress much of the current distortion caused by metrics: promote widespread peer review of teaching.

There is currently no direct measure of the quality of teaching. This does not, of course, prevent believers in metrics from seeking to evaluate teaching by proxies such as student satisfaction and graduation rates. Compilers of ranks also incorporate faculty/student ratios and faculty reputation surveys. In contrast, all the measures of research performance are aggregations of peer evaluations: Manuscripts are published on the recommendations of peer reviewers moderated by editors who are experts in the field, citations are by authors published in the field, and grants are awarded on the recommendations of experts moderated by chairs who are experts in the field.

Teams of scholars have developed comprehensive criteria, protocols, and processes that provide frameworks for the peer review of teaching. Typically, reviews are the responsibility of faculties, with the support of an expert in teaching and learning; reviewers are chosen by the faculty member from a team of accredited reviewers; the review is of the whole course, not just the observation of teaching events; and the faculty member meets their reviewers at least once before and after the review. Reviews are required for promotion and tenure at some Canadian universities, such as the University of British Columbia, as they are at several universities in the United States, the UK, and Australia.

Peer review of teaching should become an important counterweight to the excessive reliance on research for evaluating the performance of institutions and faculty, as well as the excessive reliance on student satisfaction to evaluate faculty and institutions, and on graduation rates to evaluate institutions. Peer review of teaching enables teaching to become a community endeavour and, of course, remains very much the responsibility of individual faculty, rather than central data collectors and analysts.

Measured progress

Metrics have had a long and extensive history in higher education, despite the extensive critiques they have attracted and notwithstanding the clear dangers they pose. They are pervasive in Ontario, and probably more so in other jurisdictions in Canada, the US, the UK, and elsewhere. While faculty may curb the worst excesses of metrics, it seems unlikely that they will reverse metrics’ advances. But there is a prospect of diverting the application of metrics from one of faculty’s core activities and responsibilities, teaching and learning. Faculty can do this by promoting the peer review of teaching, which is a far more valid indicator of teaching quality than the proxy metrics that are currently used. AM

Gavin Moodie is Adjunct Professor in the Department of Leadership, Higher, and Adult Education at the Ontario Institute for Studies in Education at the University of Toronto.

1. HEQCO, “The Ontario university funding model in context,” 2015, http://www.heqco.ca/SiteCollectionDocuments/Contextual%20Background%20to%20the%20Ontario%20University%20Funding%20Formula-English.pdf.
2. Ministry of Training, Colleges and Universities, “University funding model reform consultation paper,” 2015, http://news.ontario.ca/tcu/en/2015/03/ontario-launches-consultations-on-university-funding-reform.html.
3. Sue Herbert, “Reviewing Ontario’s university funding,” Academic Matters, 2016 Jan., https://academicmatters.ca/2016/01/reviewing-ontarios-university-funding.
4. David Trick, “The role of intermediary funding bodies in enhancing quality and sustainability in higher education,” HEQCO, 2015, http://www.heqco.ca/en-ca/Research/Research%20Publications/Pages/Summary.aspx?link=196.
5. Linda Jonker & Martin Hicks, “Teaching loads and research outputs of Ontario university faculty: implications for productivity and differentiation,” HEQCO, 2014.
6. Donald T. Campbell, “Assessing the impact of planned social change,” Evaluation and Program Planning, 2:1 (1979): 67-90.
7. Lewis Elton, “Goodhart’s law and performance indicators in higher education,” Evaluation and Research in Education, 18:1,2 (2004): 120-128.

Is there a metric to evaluate tenure?

How much can data meaningfully inform decisions about tenure? If data only tell part of the story, perhaps faculty should be evaluated so that their different lived experiences are also taken into consideration.

Tenure is one of the foundational concepts upon which Canada’s modern university system has been built. It is not without its issues, of course: Faculty often struggle in their early years as they work toward earning tenure.

How much can data-gathering inform tenure decisions? Can it reflect the efforts and experiences of individuals who spend years on the tenure track in the hope of meeting the expectations for tenure?

Data about tenure

Let’s begin by considering that there are data available about tenure—important and interesting data. One example is data from the CAUT Almanac of Post-Secondary Education in Canada (see Table 1), which provides quantitative details about the number of faculty with tenure, in tenure-track positions, and in other types of positions. The data are constrained by changes in government policy regarding the collection of the data and, to the best of our knowledge, more recent figures are not readily available. (While it is tangential to this article, when using data for evaluative purposes it is important to consider the possibility that data collection can be altered over time, causing the evaluative process to lose clarity.)

What Table 1 reveals is how rapidly faculty tenure numbers can change. The decline from 27,633 to 19,137 tenured faculty seen between 2006/2007 and 2007/2008 seems to defy the often suggested permanence of tenure. Similarly, the “Other” column values decline dramatically, from 11,361 to 3,822 between 2004/2005 and 2005/2006. These data allow us to evaluate the stability of faculty and confirm the “precarious” moniker used to describe university faculty. In that sense, data-gathering does facilitate some understanding of tenure.

Another collection of data, also from the CAUT Almanac, outlines the number of faculty within different ranks (see Table 2). It is often assumed that promotions hinge on tenure; however, the data clearly show that policies do allow promotion and tenure to be treated somewhat independently. Consider, for example, that in 2004/2005 there were 15,543 tenured faculty but 21,945 full and associate professors. This certainly allows one to conclude that university policies are not as simple as might be assumed. Over the decade covered by the table, the number of full professors increased by 24 per cent, associate professors and assistant professors both increased by 47 per cent, and lecturers by 151 per cent—demonstrating that growth has occurred and that it has favoured non-research faculty. While the lecturer category may include permanent faculty with no research requirement, it also includes sessional instructors, who hold the least permanent positions. Growth in both undermines one of the core missions of the university: research. The data also reveal that new research faculty are embedded in a situation with proportionately fewer research mentoring options.

The stories the data do not tell

Although the data presented show various trends and phenomena, they do not, in and of themselves, provide information about tenure decision-making or the experiences of faculty members entering academia. Even if the CAUT Almanac analysis were done at the institutional level and by subject discipline, which is feasible, the information would not suffice. There should be skepticism about its value for informing individual faculty members or tenure and promotion committees about the process of achieving tenure.

Previously, we shared our concerns that evaluative data did not reflect personal decision-making or lived experiences. This led us to look for narratives about the meaning of tenure-track. We were both well established in public education, Tim as an experienced teacher and Tory in educational leadership, before moving to positions in higher education, giving up employment security to pursue tenure. In making that decision, it would have been helpful to have data showing the success rate of achieving tenure. This would have allowed us to make more informed decisions about moving to higher education and, perhaps, such data would illustrate just how precarious it is to pursue tenure. In addition, seeing a description of what goes wrong during tenure-track and what leads to failed attempts at achieving tenure would have allowed us to self-evaluate throughout the process. Instead, tenure-track faculty are given specious advice from colleagues in building corridors, rather than factual evaluative data.

Tenure-track faculty are given specious advice from colleagues in building corridors, rather than factual evaluative data.

It is these kinds of considerations that make one wonder if the control of data collection, and the choices about what data are collected, are being manipulated to avoid revealing or facilitating forms of evaluation that might lead to demands for changes to institutional systems and governance.

This is problematic. For instance, data could be collected that enumerate the number of publications in various categories between successful and unsuccessful applications by subject area, immediately reinforcing notions of tenure being based upon “bean-counting,” and therefore helping to diminish academic freedom. How can faculty be innovative and responsive to the dynamic nature of research if they are ultimately focused on tabulations in fixed categories to achieve tenure? Academic freedom addresses this by ensuring that professors can make choices best suited to the innovations arising from their research. Tenure cannot be measured in terms of static, quantitative achievements that defy the dynamic university environment that is supposed to facilitate innovation.

These considerations led us to develop a book about tenure-track experiences. The Academic Gateway: Understanding the Journey to Tenure1 brings together narratives of tenure-track experiences from across Canada. To make the task tractable, it focuses on faculty members in education, a discipline in which work experience as a teacher often informs the professorial role. This is not to suggest that other disciplines do not have overlap between careers and academia, only that education was viewed as sharing one meaningful overlap—teaching is 40 per cent of the assignment. In this way, we feel the narratives are likely more biased toward illustrating smoother transitions than might be found in other disciplines. However, they also demonstrate that there is a considerable diversity of experience.

The book includes authors from every province, and approximately equal numbers of academics who are early, midway, and late in the tenure-track process. As well, there is gender equity across the chapters. However, knowing all this about the book does not allow one to evaluate the experiences that arise within its chapters. These differences are reflected in the varied ways in which the authors entered higher education—for example, some worked while pursuing graduate school and others took time away from their careers—and the experiences they had after joining the academy. The chapters speak to how lived experiences can be much more illuminating than simple quantitative data. In essence, it is the recognition of the importance of these varied experiences that obliges the use of peer evaluation, rather than just data, to make decisions about tenure.

Consider the following quotes from The Academic Gateway that show a variety of faculty experiences and speak to very different circumstances when entering academia: “My wife and I left permanent education positions, financial security, and family networks in Alberta… as I was offered a one-year term position as an assistant professor… This was not a tenure-track appointment, but I was told it would turn into one.” Another writer speaks of the emotional strain of moving: “In my new surroundings—living alone for the first time in 29 years, feeling lost and lonely, and having left family and friends behind—I began to seriously doubt my ability to cope.” There are also changes in stature noted: “I am on a steep learning path. Transitioning from a position where I had designated authority to a position as a junior faculty member means I am negotiating and navigating the bounds of my role.” While a critical eye can point to these experiences and say that none are relevant to tenure considerations, they are evidence of the deep effects that moving into academia can have on different individuals. For example, the following quote describes a formal application within tenure-track and demonstrates issues with data-gathering and requiring applicants to conform to specific criteria:

This template featured categories and requirements entirely (and appropriately) geared to an academic career; but I found myself, for example, unable to list technical papers and publications I had written because policies in government had prohibited named authorship. This issue turned out to be the tip of the iceberg, as the methods, vocabulary, and rhetorical strategies I used to succeed in government did not transfer as readily as expected.

This issue may have become even more prevalent in recent decades, because graduate students tend to be older, taking longer to graduate as they work to pay for their education.

One difficulty that arises with data collection is that it does not recognize the fine details that are part of lived experiences. The absence of these details creates an environment in which academic supervision does not have to consider anything other than quantitative data. However, many authors in the book speak of circumstances in their lived tenure-track experience that needed a more nuanced analysis and a more personalized response in order to support their personal and professional growth. It is alarming to contrast the personal upheaval of moving into the academy with an impersonal, data-oriented approach that is oblivious to personal struggles.

One difficulty that arises with data collection is that it does not recognize the fine details that are part of lived experiences.

The narratives in the book speak to the human side of the tenure track. While some may favour a quantitative approach to tenure decisions, the book’s qualitative narratives demonstrate that such an approach falls short. Innovative research is rarely clear-cut and frequently ill-defined, yet fostering it speaks to the very core of what tenure is for.

A model of tenure: Grief and self-determination

After the book was published, we continued thinking about tenure and about the book as a source of cross-case narratives. This led to a model of tenure comprising two components. The first addresses the change in workplace culture when one moves into a tenure-track position: the ways of a new institution must be learned, a sense of isolation can set in, and a loss of community becomes apparent when individuals move long distances. We treat these essentially as a grieving process that models the sense of loss associated with former workplace colleagues, community familiarity, and the capacity to handle fairly routine tasks efficiently.

It appears that, when the tenure track progresses well for an individual, this grief component of the model runs its course: a professor who is on track to earn tenure, or believes they are, will work through the concerns that led us to include the grief component. Of course, in some cases, individuals simply do not manifest any grief; a person who moves directly from graduate studies to a tenure-track position at the same institution, for example, might transition quite seamlessly. We have also hypothesized that when individuals perceive that their tenure track is not progressing successfully, the grief manifests itself in more pronounced ways that reflect a continued sense of regret.

The other component of the model, which endures after earning tenure, is self-determination. This is a complex theory that reflects the wide array of skills and tasks required of university professors.2 Perhaps its most significant feature is autonomy—the capacity to self-direct one’s work. Autonomy is not the absence of having to choose (one cannot invoke self-determination to justify lethargy after earning tenure), but the capacity to choose among the different options that arise. In research, this includes academic freedom, but it also includes making choices in situations where academic freedom is not at issue.

One of the consequences of this model is that self-determination theory is a grand theory that will not easily succumb to quantitative evaluation. How can one, for example, measure autonomy? Even if it could be measured, as one progresses through the tenure-track years, how much autonomy is required to grant tenure? In this sense, the research aspect of higher education requires freedom to explore where one’s expertise suggests they should explore.

Within the academy, it is clearly important for tenured faculty to have intrinsic motivation. This is a cornerstone of self-determination theory, but it runs contrary to having a strictly measured criterion for tenure success: it is fundamentally difficult to foster intrinsic motivation when pre-specified metrics will decide the ultimate outcome. The paradoxical nature of tenure decisions manifests itself in the need to assess how well an individual grows when their role is largely self-determined. A pre-defined measure would bias the self-determined element and thus defy the very definition of tenure. Also important within self-determination theory is the relatedness inherent in it, whereby interpersonal relationships provide the positive feedback that helps develop intrinsic motivation. Rather than suggesting that more data be collected, this points to a need to further humanize the process and to encourage social interaction throughout the tenure-track years as a way to support both personal and professional growth. Just as autonomy does not excuse post-tenure lethargy, intrinsic motivation does not support treating tenure-track faculty as if they live and work in a vacuum.

Fundamentally, self-determination theory operates within individuals as people. Peer review broadens the notion of self by bringing peers into the process of evaluation; it is an approach that shares in the experience of developing one’s self-determination. It is qualitative because of its small sample—the evaluation of a single individual—and because it honours the human capacity for exercising choice. Tenure is fundamentally about personal growth and the development of niche expertise, neither of which suits a data-collection approach. They are, however, exactly what some qualitative methods were developed to reveal.


It is disturbing that proposals for altering evaluative approaches do not seem to consider the theoretical grounding of what is being proposed. The Academic Gateway does not reveal which authors have achieved tenure. This is left purposely unresolved so that readers can consider how they might assess each individual’s merit for tenure. We doubt that anyone can devise a data-collection approach that would successfully predict the eventual tenure outcomes of the book’s authors. AM


Tim Sibbald is an Associate Professor in the Schulich School of Education at Nipissing University. Victoria Handford is an Assistant Professor in the Faculty of Education and Social Work at Thompson Rivers University.

1. TM Sibbald & V Handford (Eds.), The Academic Gateway: Understanding the Journey to Tenure, Ottawa: University of Ottawa Press, 2017. For the quotes contained in this article, see pp. 29–40, 93–110, 179–196, and 249–264.
2. EL Deci & RM Ryan, “The ‘what’ and ‘why’ of goal pursuits: Human needs and the self-determination of behavior,” Psychological Inquiry, 11:4 (2000): 227–268; A Van den Broeck, DL Ferris, CH Chang, & CC Rosen, “A review of self-determination theory’s basic psychological needs at work,” Journal of Management, 42:5 (2016): 1195–1229.

The abuses and perverse effects of quantitative evaluation in the academy

The world of academic research is scored according to so-called “objective” measures, with an emphasis on publications and citations. But the very foundations of this approach are flawed. Is it time to abandon these simplistic ranking schemes?

Since the 1990s, when the neoliberal ideology of the “new public management” introduced rankings into academia, researchers and administrators have become increasingly familiar with the terms “evaluation,” “impact factors,” and “h-index.” Since that time, the worlds of research and higher education have fallen prey to a dangerous evaluation fever. It seems that we want to assess everything: teachers, faculty, researchers, training programs, and universities. “Excellence” and “quality” indicators have proliferated in usage without anyone really understanding what these terms precisely mean or how they are determined.

“Excellence” and “quality” indicators have proliferated in usage without anyone really understanding what these terms precisely mean or how they are determined.

Bibliometrics, a research method that considers scientific publications and their citations as indicators of scientific production and its uses, is one of the primary tools that informs the many “excellence indicators” that this administrative vision of higher education and research is attempting to impose on everyone. Whether ranking universities, laboratories, or researchers, calculating the number of publications and citations they receive often serves as an “objective” measure for determining research quality.

It is therefore important to understand the many dangers associated with the growing use of oversimplified bibliometric indicators, which are supposed to objectively measure researchers’ productivity and scientific impact. This article analyzes two key indicators used extensively by both researchers and research administrators, and examines the perverse effects that the oversimplified use of bad indicators has on the dynamics of scientific research, particularly in the social and human sciences.

The impact factor: Corrupting intellectual output

A journal’s impact factor (IF) is a simple arithmetic average: the number of citations received in a given year (e.g., 2016) by the articles a journal published during the previous two years (in this case, 2014 and 2015), divided by the number of those articles. The IF has been calculated and published every year since 1975 in the Web of Science Journal Citation Reports. As early as the mid-1990s, experts in bibliometrics were drawing attention to the absurdity of conflating articles and journals in this way. However, this did not stop decision-makers—and, it must be said, supposedly rational researchers—from using a journal’s IF to assess individual researchers and to establish financial bonuses based directly on the numerical value of the IF.
For example, as the journal Nature reported in 2006, the Pakistan Ministry of Science and Technology calculates the total IF of articles over a year to help it establish bonuses ranging between $1,000 and $20,000. The Beijing Institute of Biophysics established a similar system: an IF of 3 to 5 brings in 2,000 yuan ($375) per point and an IF above 10 brings in 7,000 yuan ($1,400) per point.
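The two-year IF described above reduces to a single division. A minimal sketch, using journal figures invented purely for illustration:

```python
# Minimal sketch of the two-year impact factor arithmetic described above.
# The journal figures below are invented for illustration.

def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """IF for a given year: citations received that year by articles the
    journal published in the previous two years, divided by the number
    of those articles."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 150 citations in 2016 to the 60 articles
# it published in 2014-2015.
print(impact_factor(150, 60))  # → 2.5
```

Note how the calculation says nothing about any individual article: a handful of heavily cited papers can carry a journal whose typical article is rarely cited at all, which is precisely the article/journal confusion the bibliometricians warned about.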

However, in an editorial in the same issue, Nature criticized this system, noting that a mathematics journal cannot achieve an IF as high as a biomedical research journal, simply because the pool of potential citers in the biomedical sciences is substantially larger. No sensible person believes that biomedical articles are superior to mathematics articles, nor that this scoring system justifies granting one group of authors a larger bonus than another. And, in a more recent (and ugly) example of the kind of intellectual corruption generated by taking the ranking race seriously, universities have contacted highly cited researchers working for other institutions and offered them compensation for listing the university as an affiliation in their next articles.1 These fictitious affiliations, with no real teaching or research duties attached, allow marginal institutions to enhance their position in university rankings without having to maintain real laboratories.

These extreme cases should be enough to warn university managers and their communications departments away from the use or promotion of such inaccurate rankings. In short, it is important to scrutinize the ranking system’s “black box,” rather than accepting its results without question.

The exploitation of these false rankings and indicators to promote institutional and individual achievement is a behaviour that reveals an ignorance of the system’s flaws. Only the institutions that benefit from association with these rankings, researchers who profit from incorrectly computed bonuses based on invalid indicators, and journals that benefit from the evaluative use of impact factors, can believe—or feign to believe—that such a system is fair, ethical, and rational.

The h index epidemic

In the mid-2000s, when scientific communities started devising bibliometric indices to make individual evaluations more objective, American physicist Jorge E. Hirsch, from the University of California in San Diego, came up with a proposition: the h index. This index is defined as being equal to the number N of articles published by a researcher that received at least N citations since their publication. For example, if an author has published 20 articles, 10 of which were cited at least 10 times each since their publication, the author will have an h index of 10. It is now common to see researchers cite their h index on their Facebook pages or in their curricula vitae.
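Hirsch’s definition is easy to operationalize. A minimal sketch, using the article’s own worked example (20 papers, 10 of them cited at least 10 times each); the individual citation counts are invented to match that profile:

```python
# A minimal sketch of the h-index definition described above.
def h_index(citation_counts):
    """Largest N such that N of the papers have at least N citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank   # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

# The article's example: 20 papers, 10 of them cited at least 10 times each.
# These per-paper citation counts are invented to match that profile.
papers = [12] * 10 + [3] * 10
print(h_index(papers))  # → 10
```

Sorting in descending order makes the definition transparent: walk down the list and stop at the first paper whose citation count falls below its rank.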

The problematic nature of the h index is reflected in the very title of Hirsch’s article, published in a journal usually considered prestigious, the Proceedings of the National Academy of Sciences of the United States of America: “An index to quantify an individual’s scientific research output.” In fact, this index is neither a measure of quantity (output) nor a measure of quality or impact: it is a combination of both, arbitrarily mixing the number of articles published with the number of citations received. In the eyes of its creator, the index was meant to counter the use of the total number of articles published, a metric that does not take their quality into account. The problem is that the h index is itself strongly correlated with the total number of articles published, and is therefore redundant.

Furthermore, the h index has none of the basic properties of a good indicator. As Waltman and van Eck demonstrated, the h index is incoherent in the way it ranks researchers whose number of citations increases proportionally, and it therefore “cannot be considered an appropriate indicator of a scientist’s overall scientific impact.”2

This poorly constructed index also causes harm when it is used as an aid in the decision-making process. Let us compare two scenarios: A young researcher has published five articles, which were cited 60 times each (for a given period); a second researcher, of the same age, is twice as prolific and wrote 10 articles, which were cited 11 times each. The second researcher has an h index of 10, while the first researcher only has an h index of 5. Should we conclude that the second researcher is twice as “good” as the first one and should therefore be hired or promoted ahead of the first researcher? Of course not, because the h index does not really measure the relative quality of two researchers and is therefore not a technically valid indicator.
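The comparison above can be checked mechanically. This self-contained sketch applies the same h-index definition to the two hypothetical citation records and reproduces the paradox: the researcher with far fewer total citations comes out with the higher index:

```python
# Self-contained check of the comparison above: five papers cited 60 times
# each versus ten papers cited 11 times each.
def h_index(citation_counts):
    """Largest N such that N of the papers have at least N citations each."""
    counts = sorted(citation_counts, reverse=True)
    return max((rank for rank, c in enumerate(counts, start=1) if c >= rank),
               default=0)

print(h_index([60] * 5))   # → 5   (300 citations in total)
print(h_index([11] * 10))  # → 10  (110 citations in total)
```

With less than half the total citations, the second profile scores double the index, which is exactly the perverse incentive the text describes.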

Despite these fundamental technical flaws, use of the h index has become widespread in many scientific disciplines. It seems as though it was created primarily to satisfy the ego of some researchers. Let us not forget that its rapid dissemination has been facilitated by the fact that it is calculated automatically within journal databases, making it quite easy to obtain. It is unfortunate to see scientists, who purportedly study mathematics, lose all critical sense when presented with this flawed and oversimplified number. It confirms an old English saying, “Any number beats no number.” In other words, it is better to have an incorrect number than no number at all.

A multidimensional universe

What is most frustrating in the debates around research evaluation is the tendency to try to summarize complex results with a single number. The oversimplification of such an approach becomes obvious when one realizes that it means transforming a space with many dimensions into a one-dimensional space, thus realizing Herbert Marcuse’s prediction of the advent of a One-Dimensional Man. In fact, by combining various weighted indicators to get a single number, we lose the information on each axis (indicators) within the multidimensional space. Everything is reduced to a single dimension.

Only by considering the many different initial indicators individually can we determine the dimensions of concepts such as research quality and impact. While postsecondary institutions and researchers are primarily interested in the academic and scientific impact of these publications, we should not ignore other impacts for which valid indicators are easily accessible. Think of the economic, societal, cultural, environmental, and political impacts of scientific research, for example.

In the case of universities, research is not the only mission and the quality of education cannot be measured solely by bibliometric indicators that ignore the environment in which students live and study, including the quality of buildings, library resources, or students’ demographic backgrounds. For these dimensions to emerge, we must avoid the “lamppost syndrome,” which leads us to only look for our keys in brightly lit places rather than in the specific (but dark) places where they are actually to be found. It is therefore necessary to go beyond readily accessible indicators and to conduct case studies that assess the impacts for each of the major indicators. It is a costly and time-consuming qualitative operation, but it is essential for measuring the many impacts that research can have.

The simplistic nature of rankings culminates in annual attempts to identify the world’s “best” universities, as if the massive inertia of a university could change significantly every year! This in itself should suffice to show that the only aim of these rankings is to sell the journals that print them.

The simplistic nature of rankings culminates in annual attempts to identify the world’s “best” universities, as if the massive inertia of a university could change significantly every year!

Quantifying as a way to control

The heated arguments around the use of bibliometric indicators for assessing individual researchers often neglect a fundamental aspect of this kind of evaluation: the role of peers in the evaluation process. Peer review is a very old and dependable system that requires reviewers to have first-hand knowledge of the assessed researcher’s field of study. However, in an attempt to assert more control over the evaluation process, some managers in universities and granting agencies are pushing forward with a new concept of “expert review,” in which an individual, often from outside the field of research being considered, is responsible for evaluating its merits. A standardized quantitative evaluation, such as the h index, makes this shift easier by providing supposedly objective data that can be used by anyone. It is in this context that we need to understand the creation of journal rankings as a means to facilitate, if not to mechanize, the evaluation of individuals. This constitutes a de facto Taylorization of the evaluation process—the use of an ostensibly scientific method to de-specialize the expertise needed for evaluation.

Thus surfaces a paradox. The evaluation of a researcher requires appointing a committee of peers who know the researcher’s field very well. These experts would already be familiar with the best journals in their field and do not need a list concocted by some unknown group of experts ranking them according to different criteria. On the other hand, these rankings allow people who don’t know anything about a field to pretend to make an expert judgment just by looking at a ranked list without having to read a single paper. These individuals simply do not belong on an evaluation committee. Therefore, the proliferation of poorly built indicators serves the process of bypassing peer review, which does consider productivity indices but interprets them within the specific context of the researcher being evaluated. That some researchers contribute to the implementation of these rankings and the use of invalid indicators does not change the fact that these methods minimize the role of the qualitative evaluation of research by replacing it with flawed mechanical evaluations.

Pseudo-internationalization and the decline of local research

A seldom-discussed aspect of the importance given to impact factors and journal rankings is that they indirectly divert attention from the study of local, marginal, or less popular topics. This is particularly risky in the human and social sciences, in which research topics are, by nature, more local than those of the natural sciences (there are no “Canadian” electrons). Needless to say, some topics are less “exportable” than others.

Since the most frequently cited journals are in English, the likelihood of being published in them depends on the interest these journals have in the topics being studied. A researcher who wants to publish in the most visible journals would be well advised to study the United States’ economy rather than the Bank of Canada’s uniqueness or Quebec’s regional economy, topics that are of little interest to an American journal. Sociologists whose topic is international or who put forward more general theories are more likely to have their articles exported than those who propose an empirical analysis of their own society. If you want to study Northern Ontario’s economy, for example, you are likely to encounter difficulty “internationalizing” your findings.

Yet is it really less important to reflect on this topic than it is to study the variations of the New York Stock Exchange? As a result, there is a real risk that local but sociologically important topics lose their value and become neglected if citation indicators are mechanically used without taking into account the social interest of research topics in the human and social sciences.

Conclusion: Numbers cannot replace judgement

It is often said—without supporting arguments—that rankings are unavoidable and that we therefore have to live with them. This is, I believe, a false belief: through resistance, researchers can bring such ill-advised schemes to a halt. In Australia, for example, researchers’ fierce reaction to journal rankings compelled the government to abandon this simplistic approach to research evaluation.

In summary, the world of research does not have to yield to requirements that have no scientific value and that run against academic values. Indeed, French-language journals and local research topics that play an invaluable role in helping us better understand our society have often been the hardest hit by these ill-advised evaluation methods, and so fighting back against this corruption is becoming more important every day.

Yves Gingras is a Professor in the History Department and Canada Research Chair in History and Sociology of Science at the Université du Québec à Montréal.

This article is a translation of a revised and shorter version of the essay, « Dérives et effets pervers de l’évaluation quantitative de la recherche : sur les mauvais usages de la bibliométrie », Revue internationale PME, 28:2 (2015): 7–14. For a more in-depth analysis, see: Yves Gingras, Bibliometrics and Research Evaluation: Uses and Abuses, Cambridge: MIT Press, 2016.

1. Yves Gingras, “How to boost your university up the rankings,” University World News, 329 (18 July 2014), http://www.universityworldnews.com/article.php?story=20140715142345754. See also the responses in Science, 335 (2 March 2012): 1040–1042.
2. L Waltman and NJ van Eck, “The inconsistency of the h-index,” 2011, http://arxiv.org/abs/1108.3901.

Dérives et effets pervers de l’évaluation quantitative de la recherche

Les professeurs et les chercheurs universitaires sont de plus en plus évalués à l’aide de mesures dites « objectives », qui mettent l’accent sur les publications et les citations. Mais le fondement même de cette approche est problématique. Le temps est-il venu d’abandonner ces méthodes de notation simplistes?

Avec l’arrivée en milieu universitaire de l’idéologie néolibérale adossée aux techniques du nouveau management public avec ses « tableaux de bord », surtout depuis les années 1990, les chercheurs et les administrateurs utilisent de plus en plus souvent les mots « évaluation », « facteurs d’impact », « indice h ». Le monde de la recherche et de l’enseignement supérieur est ainsi la proie d’une véritable fièvre de l’évaluation. On veut tout évaluer : les enseignants, les professeurs, les chercheurs, les programmes de formation et les universités. Les indicateurs « d’excellence » et de « qualité » se multiplient sans que l’on sache toujours sur quelles bases ils ont été construits.

Les indicateurs « d’excellence » et de « qualité » se multiplient sans que l’on sache toujours sur quelles bases ils ont été construits.

Parmi les outils utilisés pour mettre au point les nombreux « indicateurs d’excellence » qu’une vision gestionnaire de l’enseignement supérieur et de la recherche tente d’imposer à tous comme une évidence, une place de choix est aujourd’hui accordée à la bibliométrie—méthode de recherche qui consiste à utiliser les publications scientifiques et leurs citations comme indicateurs de la production scientifique et de ses usages. Que ce soit pour classer les universités, les laboratoires ou les chercheurs, le calcul du nombre de publications et des citations qu’elles reçoivent sert souvent de mesure « objective » de la valeur des résultats de recherche des uns et des autres.

Il est donc important de rappeler, même brièvement, les nombreux dangers que comporte l’utilisation mécanique, qui tend à se répandre, d’indicateurs bibliométriques supposés mesurer de façon « objective » la productivité et l’impact scientifique des chercheurs. Nous nous limiterons ici à analyser les usages des deux principaux indicateurs amplement utilisés tant par les chercheurs que par les administrateurs de la recherche. Nous nous pencherons aussi sur les effets pervers des usages simplistes de mauvais indicateurs sur la dynamique de la recherche scientifique, particulièrement dans les domaines des sciences sociales et humaines.

Les mauvais usages du facteur d’impact

Calculé et publié chaque année depuis 1975 dans le Journal Citation Reports du Web of Science (maintenant propriété de Clarivate Analytics), le facteur d’impact (FI) d’une revue consiste en une simple moyenne arithmétique du nombre de citations obtenues une année donnée (disons 2016) par les articles publiés par une revue au cours des deux années précédentes (soit 2014 et 2015). Bien que, dès le milieu des années 1990, des experts en bibliométrie n’aient cessé d’attirer l’attention sur l’absurdité de confondre ainsi les articles et les revues, cela n’a pas empêché les « décideurs » et, il faut le souligner, des chercheurs supposément rationnels, d’utiliser le facteur d’impact des revues pour évaluer les chercheurs et instituer des systèmes de primes fondés directement sur la valeur numérique du facteur d’impact des revues! Comme le rapportait la revue Nature en 2006, le ministère de la Science du Pakistan calcule la somme des facteurs d’impact des articles sur une année pour fixer une prime variant entre 1 000 et 20 000 dollars! En Chine, l’Institut de biophysique de Beijing a établi un système semblable : un FI entre 3 et 5 rapporte 2 000 yuans (375 dollars) par point, et un FI supérieur à 10 rapporte 7 000 yuans (1 400 dollars) par point.

Dans un éditorial du même numéro, la revue dénonçait cette absurdité. Or, il est impossible que le FI d’une revue de mathématiques (par exemple) ait jamais la valeur de celui d’une revue de recherche biomédicale! Pourtant, aucune personne sensée ne peut croire que les articles de médecine sont tous supérieurs aux articles de mathématiques et justifient donc d’accorder à leurs auteurs une prime plus importante. Dernier exemple montrant le genre de corruption intellectuelle engendrée par la course aux classements : certaines universités contactent des chercheurs très cités qui sont employés par d’autres institutions et leur offrent d’ajouter leur adresse dans leurs publications en échange d’une rémunération1. Ces affiliations factices, auxquelles aucune tâche d’enseignement ou de recherche n’est attachée, et dont les chercheurs qui y participent sont complices, permettent à des institutions marginales d’améliorer facilement leur position dans les classements des universités sans avoir à créer de véritables laboratoires.

Ces cas extrêmes devraient suffire pour mettre en garde les gestionnaires d’université, ou leurs chargés de communication, contre les usages médiatiques de tels classements douteux. En somme, mieux vaut regarder à l’intérieur de la « boîte noire » des classements plutôt que de l’accepter telle quelle comme si elle contenait un beau cadeau de bienvenue…

L’usage abusif de classements et d’indicateurs faussement précis constitue en somme un comportement qui trahit l’ignorance des propriétés des indicateurs utilisés. Seul l’opportunisme des chercheurs, qui profitent de primes mal calculées, et des revues, qui profitent de l’usage évaluatif des facteurs d’impact, peut les amener à croire, ou à feindre de croire, qu’un tel système est juste et rationnel.

L’épidémie de « l’indice h »

Il est devenu courant de voir des chercheurs indiquer sur leur page Facebook ou dans leur curriculum vitae leur « indice h ». Au milieu des années 2000, alors que les milieux scientifiques avaient commencé à concocter des indices bibliométriques pour rendre les évaluations individuelles plus « objectives », le physicien américain Jorge E. Hirsch, de l’université de Californie à San Diego, y est allé de sa proposition : l’indice h. Cet indice est défini comme étant égal au nombre d’articles N qu’un chercheur a publiés et qui ont obtenu au moins N citations chacun depuis leur publication. Par exemple, un auteur qui a publié 20 articles parmi lesquels 10 sont cités au moins 10 fois chacun aura un indice h de 10.

Le caractère improvisé de cet indice se voit déjà au titre même de l’article paru dans une revue pourtant considérée comme « prestigieuse », les Proceedings de l’Académie nationale des sciences des États-Unis : « un indice pour quantifier la production (output) scientifique d’un chercheur ». En fait, cet indice n’est ni une mesure de quantité (output), ni une mesure de qualité ou d’impact, mais un composite des deux. Il combine de façon arbitraire le nombre d’articles publiés et le nombre de citations obtenues. Cet indice est supposé contrer l’usage du seul nombre d’articles, lequel ne tient pas compte de leur « qualité ». Le problème, c’est qu’il a rapidement été démontré que l’indice h est lui-même très fortement corrélé au nombre total d’articles et se révèle ainsi redondant!

Pis encore, il n’a aucune des propriétés de base que doit posséder un bon indicateur. Comme l’ont montré Ludo Waltman et Nees Jan van Eck, l’indice h est en réalité incohérent dans la manière dont il classe des chercheurs dont le nombre de citations augmente de façon proportionnelle. Ces auteurs en concluent que l’indice h « ne peut être considéré comme un indicateur approprié de l’impact scientifique global d’un chercheur »2.

Cet indice mal construit est même dangereux lorsqu’il est utilisé comme aide à la prise de décisions car il peut générer des effets pervers. Un exemple simple suffit à le démontrer. Comparons deux cas de figure : un jeune chercheur a publié seulement cinq articles, mais ceux-ci ont été cités 60 fois chacun (pour une période de temps donnée) ; un second chercheur, du même âge, est deux fois plus prolifique et possède à son actif 10 articles, cités 11 fois chacun. Ce second chercheur a donc un indice h de 10, alors que le premier a un indice h de 5 seulement. Peut-on en conclure que le second est deux fois « meilleur » que le premier et devrait donc être embauché ou promu? Bien sûr que non… On voit ici que l’indice h ne mesure pas vraiment la qualité relative de deux chercheurs et est donc un indicateur techniquement invalide.

Despite these crippling technical flaws, use of the h-index has become widespread in many scientific disciplines. It seems tailor-made, first of all, to flatter the narcissism of certain researchers. Nor should we forget that its rapid spread was helped by the fact that it is calculated automatically in all the major databases and can therefore be obtained without any effort! It is nonetheless dismaying to see scientists who are supposed to have studied mathematics lose all critical sense when faced with a simplistic number. This confirms an old English adage that has every appearance of a social law: "Any number beats no number." In other words, a bad number is better than no number at all.

A multidimensional universe

The most irritating aspect of debates over research evaluation is the tendency to want to sum everything up in a single number. The simplism of such an approach becomes patent when one observes that it amounts to transforming a multidimensional space into a space of dimension zero! A number, considered here as a point, has dimension zero, and combining several weighted indicators into a single figure loses the information carried by each axis (indicator) of a multidimensional space. At best, if we consider the point to lie on a line, we have still reduced everything to a single dimension.

Only by taking several different indicators into account can we capture the different dimensions of a concept such as research quality or impact. The academic community is primarily interested in the scientific impact of publications, but other kinds of impact, for which valid indicators are more or less easy to find, cannot be neglected: consider the economic, societal, cultural, environmental, and political impacts of scientific research.

In the case of universities, moreover, research is only one function of the institution, and the quality of teaching cannot be measured by the yardstick of research while ignoring the environment in which students are immersed (quality of buildings, library resources, etc.). To bring these dimensions to light, we must overcome the "lamp-post syndrome," which leads us to look for our keys in a well-lit area rather than in the precise (but dark) spot where they were actually lost. It is therefore necessary to go beyond easily accessible indicators and to conduct case studies in order to assess the presence of some of these impacts for each of the major indicators. This is a costly but indispensable qualitative undertaking for anyone whose ambition is to measure the impacts of research across several sectors.

The simplism of rankings reaches its peak with the annual publication of university league tables, which claim to identify the "best" universities in the world.


Quantifying to control

The lively discussions surrounding the use of bibliometric indicators in researcher evaluation usually leave in the shadows a fundamental aspect of evaluation: the role of researchers' expertise in the evaluation process. The desire to exert greater control over the very old system of peer review, which rests on first-hand knowledge of the evaluated researcher's field, is slowly giving way to the idea of expert review, conducted by experts who are often external to the research field in question. Standardized quantitative evaluation facilitates this shift by supplying supposedly "objective" data that can then be used by anyone. It is in this context that we must understand the creation of journal rankings into categories A, B, and C, designed to facilitate, if not mechanize, individual evaluation. This constitutes de facto a Taylorization of evaluation, a deskilling of the expertise that evaluation requires.

We thus face a paradox. Evaluating a researcher requires assembling a committee of peers who know the field well. These experts already know, by definition, which journals are good in their field and have no need of a list drawn up by some unknown group of experts ranking them A, B, and C. Such rankings, by contrast, allow people who know nothing about a field to claim nonetheless to pass authoritative judgment on it. But then they should not be sitting on an evaluation committee in the first place! The proliferation of poorly constructed indicators thus in fact serves a process of circumventing peer review, an evaluation that should take productivity indicators into account but must interpret them in the specific context of the assessment. That some researchers contribute to establishing these rankings, as to the use of indicators that are nonetheless invalid, changes nothing about the fact that these methods have the effect of minimizing the role of qualitative research evaluation by replacing it with mechanical assessments.

Pseudo-internationalization and the decline of local research

A little-discussed aspect of the importance attached to impact factors and journal rankings is that it indirectly steers researchers away from studying local, marginal, or unfashionable topics. This is particularly dangerous in the humanities and social sciences, whose objects of study are by nature more local than those of the natural sciences. It goes without saying that some subjects are less "exportable."

Since the most-cited journals are Anglo-Saxon (and not "international"), the odds of getting into them depend on the interest those journals take in the objects studied. A researcher who wants to publish in the most visible journals is better off studying the economy of the United States than the specificities of the Bank of Canada or the regional economy of Quebec, a subject of little interest to an American journal. The sociologist whose object is "international," and therefore delocalized, or who does theory, has a better chance of exporting their articles than one who proposes an empirical study of a specific aspect of their own society. Likewise, anyone who wishes to study the economy of northern Ontario is likely to have more trouble "internationalizing" the results.

Yet is it really less important to examine that object than to study the fluctuations of the New York Stock Exchange? There is thus a real danger that local but sociologically important objects will be devalued and, in time, neglected if citation indicators are used mechanically, without regard for the social interest of research objects in the humanities and social sciences.

Conclusion: judging rather than counting

We often hear that these rankings are inevitable and that we must "live with them." That is entirely false. Resistance by researchers is fully capable of blocking such ill-conceived projects. In Australia, notably, researchers' strong reaction to journal rankings forced the government to back down and abandon the use of these rankings for research evaluation. In short, the research community does not have to yield to demands that have nothing scientific about them and belong to logics foreign to it. All the more so since it is in fact francophone journals, and local research objects of great importance to society, that stand to lose from these evaluative excesses.

Yves Gingras is a professor in the Department of History and holder of the Canada Research Chair in the History and Sociology of Science at the Université du Québec à Montréal.

This text is a shorter version of an article entitled « Dérives et effets pervers de l'évaluation quantitative de la recherche : sur les mauvais usages de la bibliométrie », published in the Revue internationale PME 28(2) (2015): 7-14. For a more in-depth analysis, see Yves Gingras, Bibliometrics and Research Evaluation: Uses and Abuses, Cambridge, MA: MIT Press, 2016.

1. Yves Gingras, "How to boost your university up the rankings," University World News 329 (18 July 2014), http://www.universityworldnews.com/article.php?story=20140715142345754. See also the many reactions in Science 335 (2 March 2012): 1040-1042.
2. L. Waltman and N.J. van Eck, "The inconsistency of the h-index," 2011, http://arxiv.org/abs/1108.3901.

Collecting data from students with students

Gathering data on university students can provide important information about how they interact with the postsecondary education system, but it is also important to consult students to determine what data are collected and how.

A few years ago, I was part of an admissions committee that developed a short, voluntary survey for one of our academic programs. The survey responses would not be part of admission decisions, but we hoped to determine whether we were making effective changes both to the application process and to our outreach to communities facing barriers to accessing educational opportunities.

I teach courses in survey development and measurement theory in which I emphasize the importance of checking that respondents understand the questions as intended and are able and willing to answer them. This can be done by recording a few respondents thinking aloud as they read the instructions and respond to the questions, and by testing the questions with a small sample of respondents. Even better, a small group of respondents might be involved throughout the design process to make sure that the questions asked are appropriate, the terms used are familiar, and the intended uses of the data are clearly explained and acceptable to respondents.

Did we involve students in developing our survey? I am embarrassed to admit that it didn’t occur to us to do that. As faculty and staff who work daily with students, we were confident we knew how applicants would interpret the questions. And as a committee that focuses on equity in the admissions process, we were certain that applicants would believe our assurances about how we would and would not use the data.

Only about half of the applicants responded to the survey—a response rate that would be enviable in much social science research, but was not what the admissions committee needed to evaluate the changes it was making.

Fortunately, this story has a happy ending: Within a couple of years, the response rate increased to more than 90 per cent. We were able to compare the demographics of applicants to the program with the demographics of the wider community and, when we made changes to the application process or to how we made admissions decisions, we were able to see who was affected. That program has since closed, but we are beginning to apply what we learned from that experience to other programs.

All of the credit for this happy ending goes to students. A Master’s student who was interested in equity in education decided to make the survey the focus of her thesis research—not the results of the survey, but the survey itself. She went directly to the students who were currently in the program to find out what they thought of the survey. She led discussions with groups of students and used an anonymous online survey to find out how individual students interpreted the questions and how they believed the responses were used. Based on what she learned, this student worked with the admissions committee to revise the survey’s title, reorder and reword the questions, and rewrite the explanation of how the responses would and would not be used. Other students helped us analyze the data and, over time, suggested further revisions. The eventual success of the survey was due to their work.

Large-scale surveys, such as the National Survey of Student Engagement and the National College Health Assessment, can provide important data about students’ identities, experiences, and perceptions, but there will always be a need to develop surveys for specific contexts. If faculty, staff, and students have a common purpose in improving universities’ programs, why don’t we work together to develop better ways to collect data from the students in those programs?


Time is one reason, I suspect. Even if our admissions committee had not been confident in its ability to develop a survey that students would want to answer, we had not allowed enough time to involve students. Finding students who are interested in being involved can be a lengthy process. Depending on how we want them to be involved, there will need to be time to mentor the students in survey development and include them in meetings to develop the questions, organize the collection and the analysis of initial test responses, and revise and retest the items if necessary.

Money is another reason we often don’t involve students. For our work, however, we have been fortunate to have access to a small amount of money to hire students. As well, some students have been interested in contributing to the development of surveys as a way to gain research experience.

I wonder, though, if there isn’t another reason we don’t involve students in developing these surveys: We believe we know how students think. Or perhaps we don’t believe we know how they think, but we believe that students won’t mind making the effort to understand what we mean by the questions.

One of my favourite books on survey design is Tourangeau, Rips, and Rasinski’s The Psychology of Survey Response. Based on research by cognitive psychologists and market researchers, the authors list 13 steps respondents might take when answering a survey question, beginning with “Attend to the questions and instructions” and including “Identify what information is being sought,” “Retrieve specific and generic memories,” “If the material retrieved is partial, make an estimate,” and “Map the judgment onto the response categories.” These steps assume, of course, that respondents want to provide as accurate an answer as possible. If respondents judge the questions to be unimportant or to require too much effort, however, they may choose not to respond or, worse, may respond randomly. The authors’ findings are not encouraging: Reading the book always leaves me marvelling that anyone ever manages to collect useful survey data.

Nevertheless, surveys are the best tool we have for learning about students’ identities, experiences, and perceptions. We need such data if we are to improve programs. We owe it to our students to create the best surveys we can so that the time and effort they spend responding to them is not wasted. That means collecting data not only from students, but with students.

Ruth Childs is Ontario Research Chair in Postsecondary Education Policy and Measurement at the Ontario Institute for Studies in Education, University of Toronto.