Gathering data on university students can provide important information about how they interact with the postsecondary education system, but it is equally important to consult students about what data are collected and how.
A few years ago, I was part of an admissions committee that developed a short, voluntary survey for one of our academic programs. The survey responses would not be part of admissions decisions, but we hoped to determine whether we were making effective changes, both to the application process and to our outreach to communities facing barriers to accessing educational opportunities.
I teach courses in survey development and measurement theory in which I emphasize the importance of checking that respondents understand the questions as intended and are able and willing to answer them. This can be done by recording a few respondents thinking aloud as they read the instructions and respond to the questions, and by testing the questions with a small sample of respondents. Even better, a small group of respondents might be involved throughout the design process to make sure that the questions asked are appropriate, the terms used are familiar, and the intended uses of the data are clearly explained and acceptable to respondents.
Did we involve students in developing our survey? I am embarrassed to admit that it didn’t occur to us to do that. As faculty and staff who work daily with students, we were confident we knew how applicants would interpret the questions. And as a committee that focuses on equity in the admissions process, we were certain that applicants would believe our assurances about how we would and would not use the data.
Only about half of the applicants responded to the survey—a response rate that would be enviable in much social science research, but was not what the admissions committee needed to evaluate the changes it was making.
Fortunately, this story has a happy ending: Within a couple of years, the response rate increased to more than 90 per cent. We were able to compare the demographics of applicants to the program with the demographics of the wider community and, when we made changes to the application process or to how we made admissions decisions, we were able to see who was affected. That program has since closed, but we are beginning to apply what we learned from that experience to other programs.
All of the credit for this happy ending goes to students. A Master’s student who was interested in equity in education decided to make the survey the focus of her thesis research—not the results of the survey, but the survey itself. She went directly to the students who were currently in the program to find out what they thought of the survey. She led discussions with groups of students and used an anonymous online survey to find out how individual students interpreted the questions and how they believed the responses were used. Based on what she learned, this student worked with the admissions committee to revise the survey’s title, reorder and reword the questions, and rewrite the explanation of how the responses would and would not be used. Other students helped us analyze the data and, over time, suggested further revisions. The eventual success of the survey was due to their work.
Large-scale surveys, such as the National Survey of Student Engagement and the National College Health Assessment, can provide important data about students’ identities, experiences, and perceptions, but there will always be a need to develop surveys for specific contexts. If faculty, staff, and students have a common purpose in improving universities’ programs, why don’t we work together to develop better ways to collect data from the students in those programs?
Time is one reason, I suspect. Even if our admissions committee had not been so confident in its ability to develop a survey that students would want to answer, we had not allowed enough time to involve them. Finding students who are interested in being involved can be a lengthy process. And depending on how we want them to contribute, we also need time to mentor them in survey development, include them in meetings to develop the questions, organize the collection and analysis of initial test responses, and revise and retest the items if necessary.
Money is another reason we often don’t involve students. For our work, however, we have been fortunate to have access to a small amount of money to hire students. As well, some students have been interested in contributing to the development of surveys as a way to gain research experience.
I wonder, though, if there isn’t another reason we don’t involve students in developing these surveys: We believe we know how students think. Or perhaps we don’t believe we know how they think, but we believe that students won’t mind making the effort to understand what we mean by the questions.
One of my favourite books on survey design is Tourangeau, Rips, and Rasinski’s The Psychology of Survey Response. Drawing on research by cognitive psychologists and market researchers, the authors list 13 steps respondents might take when answering a survey question, beginning with “Attend to the questions and instructions” and including “Identify what information is being sought,” “Retrieve specific and generic memories,” “If the material retrieved is partial, make an estimate,” and “Map the judgment onto the response categories.” These steps assume, of course, that respondents want to provide as accurate an answer as possible. If respondents judge the questions to be unimportant or to require too much effort, however, they may choose not to respond or, worse, may respond randomly. The authors’ findings are not encouraging: reading the book always leaves me marvelling that anyone ever manages to collect useful survey data.
Nevertheless, surveys are the best tool we have for learning about students’ identities, experiences, and perceptions. We need such data if we are to improve programs. We owe it to our students to create the best surveys we can so that the time and effort they spend responding are not wasted. That means collecting data not only from students, but with students.