Editorial Matters: Alluring figures

There is an appealing simplicity in numbers. A number’s value is never ambiguous, even if its meaning can be. Numbers are specific and easily compared to one another. They allow us to measure the dimensions of an object or to describe the outcomes of a decision.

There has long been a desire to use numbers to measure the impacts of specific changes on complex social systems. In recent decades, the dramatic increase in the power of computers has transformed the amount of time and energy these calculations require. Computers love numbers. Every decision they make is the result of a lot of very complex math. As a result, it has become possible to collect and run calculations on vast amounts of data.

By defining a set of goals and metrics (standards by which to measure and evaluate data), entire industries are attempting to optimize delivery of services. From package delivery, to advertising, to electioneering, the assumption is that gathering and evaluating enough data will reveal ways to improve outcomes.

In Canada, and around the world, data analysis and metrics are increasingly being used in postsecondary education—to evaluate the teaching performance and research output of professors, to rank the performance of postsecondary institutions, to measure the diversity of student bodies, and to track the types of employment students attain after graduation. Performance on these metrics impacts faculty tenure and promotion, research pursuits, institutional funding, and enrolment.

However, making decisions based only on metrics overlooks those important aspects of postsecondary education that cannot be quantified. The very act of choosing the metrics used to evaluate success has serious consequences. Metrics reflect the values and priorities of those choosing them, diminishing the importance of the data not being measured.

Does a focus on metrics in higher education serve to optimize postsecondary education systems and make them more accountable? Do metrics distract from other important considerations, undermining the integrity of the system? Do metrics compound systemic biases within institutions or help reveal them so they can be addressed?

In this issue of Academic Matters, our contributors contemplate these questions and critically examine the impact that metrics have on the quality and integrity of teaching and research at universities in Canada, and around the world.

Gavin Moodie provides an overview of the different ways that metrics are used in postsecondary education and explores their unintended consequences, providing readers with a helpful primer on the issues.

Tim Sibbald and Victoria Handford examine how data inform decisions about tenure and argue for more holistic approaches to evaluation that take into account different lived experiences.

Yves Gingras dives into the world of academic research and questions the very foundations upon which published work is ranked and rewarded. Originally written in French, the article has been translated for our readers and is published in both languages.

Ruth Childs describes her experience designing student surveys and explains the importance of consulting students about what data are being collected and how. She reminds us that it is vital to understand the perspectives of those from whom we are collecting data if we are to ensure the data are useful.

Claire Polster and Sarah Amsler share their observations of higher education systems in Canada and the United Kingdom, and contrast the ways in which results-driven corporatization has impacted faculty.

Finally, Rob Copeland provides a background on the UK’s new metrics-based Teaching Excellence Framework and contemplates what its flaws will mean for the future of the country’s universities.

There is value in measuring educational inputs and outcomes in higher education, but it is crucial to be aware of the limitations of these measurements. Metrics and quantitative data analysis can be valuable tools, but an over-dependence on numbers can have unintended consequences.

By defining results numerically, we invite the ranking of institutions, programs, and individuals; we encourage competition instead of collaboration; and, perhaps most importantly, we devalue those vital aspects of the educational experience that numbers just cannot define. AM

Ben Lewis is the Editor-in-Chief of Academic Matters and Communications Lead for OCUFA.