Be careful what you wish for: The end of the NSS?

Órla Meadhbh Murray

For those who work or study in UK universities, 2020 has been a tumultuous year: strikes, COVID-19, redundancies, and irresponsible management decisions around face-to-face teaching during a global pandemic. Amidst all this, the UK government released a policy report in September – ‘Reducing bureaucratic burden in research, innovation and higher education’ – announcing that the National Student Survey (NSS) would undergo a “radical, root and branch review” by the Office for Students, to be completed by the end of 2020. This potentially marks the end of the fifteen-year reign of the NSS, a yearly student satisfaction survey for final-year undergraduates that has weathered multiple reviews and a National Union of Students (NUS) boycott in 2017. In this year of chaos such news might seem unimportant, but I argue that the NSS is an important part of the reason why UK university management have made such risky decisions around face-to-face teaching and staff cuts throughout the coronavirus crisis.

When I started teaching as a postgraduate tutor in sociology at the University of Edinburgh in 2013, the spectre of ‘bad NSS scores’ was regularly invoked as something that needed to be fixed. I became very interested in this as I’d just begun my PhD, researching UK university audit processes. As I began researching the survey and its uses, I realised that this NSS anxiety was not unique to Edinburgh; as Duna Sabri puts it, the NSS “has acquired significance that far outweighs its validity or intended use”. The NSS initially provided helpful national data on student concerns – highlighting, for example, widespread dissatisfaction with feedback. However, the survey quickly became an anxious focal point for management gaming and the subsequent pressures placed on front-line staff. By gaming, I mean pouring excessive amounts of money, time, and effort into trying to change the NSS results, sometimes in manipulative ways that focus on the survey itself rather than the problems it supposedly identifies.

As if competing in a professional sport, universities have hired consultants to advise on improving scores and created entirely new areas of work and jobs relating to the survey to get one up on their competitors. This pressure filters down to front-line teaching and professional staff, some of whom are tasked with recruiting students to fill in the survey. Others have sought to inappropriately influence the survey results, as in a 2008 case at Kingston University in which lecturers told students it would harm their graduate employment prospects if they gave lower scores. While inappropriate, they were not wrong, because the survey results do have enormous consequences – for both university finances and reputations – through their use in university league tables and rating systems, which prospective students use when choosing courses.

Alongside their use in university league tables, such as the Guardian’s annual university guide, NSS scores are the first measure on the government course comparison website Discover Uni and provide three of the six main metrics in the Teaching Excellence and Student Outcomes Framework (TEF). These resources affect university finances in a number of ways, but I will focus on how they affect income from student fees. Students use these sources to inform course choices, and because student fees fund universities, this decision carries financial weight. Since 2015, when the government removed the cap on student numbers, popular universities have been able to accept more students and increase their income from fees, often at the expense of less popular institutions. While student number caps have been partially reintroduced in response to COVID-19 and ‘conditional unconditional’ offers have been banned by the Office for Students, these measures are completely insufficient. The entire funding mechanism is broken, having been dubbed a marketised or neoliberal higher education system in which universities compete like businesses, undergoing branding exercises to attract student-consumers.

For over a decade, the NSS has been a powerful metric in a sea of university bureaucracy. Quick and easy comparative statistics – 87% satisfaction is better than 85% – and their use in league tables overemphasise minuscule differences in scores that have no significant relationship to the actual student experience. A few disgruntled students filling in the survey can knock a course or a university down the league tables, potentially affecting future student numbers and fee-based income. With this pressure on institutions, students’ demands are taken more seriously as universities compete to provide an array of support they are often ill-equipped to provide. While a rebalancing of power in universities would be welcome, this does not mean students are involved in democratic decision-making processes, nor does it necessarily mean students are adequately supported. Instead, it has funnelled money into attempts to improve student recruitment and NSS scores, whose effectiveness is a complex and messy thing to ascertain.

Students have very different expectations of universities, shaped by their backgrounds and identities. Many students will experience systemic racism, financial insecurity, or sexual violence, which are sector-wide (and societal) issues. And yet these incredibly significant experiences in students’ lives, alongside many other major issues in the sector, are not captured by the NSS because of the limited focus of its questions. The structural issues of the sector – overly large classes, lack of affordable guaranteed accommodation, and insufficient meaningful time with teaching staff – are produced by the competitive higher education funding system yet are hard to identify from the survey questions. There is no room to criticise student funding structures, institutional racism, mismanagement, or government policy, except in open-text boxes at the end of the survey, which are not publicly released. So long as metrics like the NSS exist within a competitive funding structure, the battle for students and their fee-based income will incentivise universities to spend significant amounts of money on short-termist student satisfaction gimmicks.

Given these criticisms, you might think that I welcome the government’s 2020 ‘root and branch’ review of the NSS, but there is little reason for celebration. While the report highlights that the NSS is open to gaming and generates costly and time-consuming bureaucracy, it does not acknowledge the broader funding structure that drives this. It mentions that the NSS is used in league tables, but argues the issue is that this somehow encourages students to choose “courses that are easy and entertaining, rather than robust and rigorous”, bizarrely putting student satisfaction and quality courses in opposition to each other. It identifies drop-out rates and graduate employment as “more robust measures of quality”, highlighting the lack of correlation between student satisfaction and these metrics.

But drop-out rates and graduate employment are not the same as measuring quality of provision, and, as Camille Kandiko Howson and Paul Ashwin both argue, graduate employment data is more likely to indicate student demographics and socio-economic status. Student employability statistics are a better indication of students’ class backgrounds, and of elite universities reproducing highly-paid professionals, than they are of some magical career-making quality in the degree programmes. Retention rates and comparisons between entry requirements and degree classifications incentivise universities both to manage degree classifications for reputation purposes and to avoid widening participation through flexible or contextual entry requirements. In short, simple metrics do not adequately measure the complexity of what ‘quality’ means in university provision, nor will they capture the elusive student experience better than meaningful, ongoing democratic structures within universities would. And so, while the NSS has many issues, these potential alternative proxy metrics will be no better.

The precarious fee-based funding structure of UK universities is the underlying problem in the sector, producing many of the issues students are dissatisfied with: uncapped student numbers, lack of funding except through highly competitive student fees, reliance on extortionate international student fees, and extremely competitive and time-consuming research funding processes. University management are incentivised by the competitive funding structure, and so alternative metrics will not change attempts to game audit processes. The fundamental logic of producing student satisfaction statistics and other numerical metrics is to facilitate comparison, which can always be used simplistically in league tables and comparison sites. Thus, whether it is the NSS or something else, the systemic issues will continue to pressure universities to spend more money on fancy buildings and branding exercises than on employing enough staff on secure contracts. This funding structure is why universities have consistently perpetuated the illogical falsehood that opening campuses and having face-to-face teaching is safe during the COVID-19 pandemic – the student must be satisfied, even if people die.

Rather than spending so much time and money on stressful audit processes that will always be open to gaming and abuse, the sector should move towards more democratic institutions in which staff and students have meaningful roles in decision-making, and in which students’ unions and trade unions are equal partners in the navigation of staff and student concerns. If all audit processes were eliminated tomorrow, more funding were divided equally between institutions relative to their size, and capped student numbers stopped the competitive hoarding of fee-paying students by some universities to the detriment of others, then the sector would be able to provide a much better educational experience to all students. Staff should be securely employed with appropriate workloads so they can provide meaningful teaching and support to students. If we want a truly radical approach to the NSS, it must be the first of many audit processes to fall and the death knell for competitive funding structures and league tables in UK universities.

 

Órla Meadhbh Murray is a feminist sociologist interested in the politics of knowledge, organisations and audit processes, and higher education. She is currently a postdoctoral research assistant on the SIDUS project at the Centre for Higher Education Research and Scholarship, Imperial College London, and is writing a monograph based on her PhD, a feminist institutional ethnography of UK university audit processes. She tweets @orlammurray.

Image Credit: Mark Blevis on Wunderstock (license)