Behavioural insights teams in practice: nudge missions and methods on trial

Sarah Ball and Brian Head

They go by a variety of names: nudge units, behavioural insights teams and behavioural economics teams. However, they all owe a debt to the pioneering work of the Behavioural Insights Team (BIT) in the United Kingdom (UK). Drawing on behavioural research into ‘irrational’ behaviours, ‘nudge’ instruments have been tested rigorously through randomised controlled trials (RCTs). Using this approach, the BIT UK has had a significant impact on the policy innovation landscape across the globe. Teams have emerged in Europe, the US, Canada, Japan, Singapore, Saudi Arabia, Peru, Australia, New Zealand and many more countries. They have also played a significant role in responses to the Covid-19 pandemic.

Our research recently published in Policy & Politics explores the behavioural insights phenomenon as it emerged in Australia, from which we derive analysis relevant to global actors and governments engaged in behavioural insights. In two independent exploratory studies, we sought to understand how such teams actually operate in practice. One study was an in-depth observational study of staff in the Behavioural Economics Team of the Australian Government (BETA). The other was an interview-based study of three teams, namely, those operating in two state governments, New South Wales and Victoria, together with the Australian government’s BETA.

Our findings highlighted that the scientific persuasiveness of evidence drawn from RCTs played a significant role in the promotion of the behavioural approach in government. This sometimes led to the exclusion of other methods and to the exclusion of policy issues not amenable to behavioural trials. This scientific focus permeated many of the work practices across all three teams. Staffing and training of the teams focused on the attainment of technical expertise in designing and delivering trials. Their methodology shaped how they selected projects and how the projects were designed. In most cases the teams recognised the need for more qualitative or exploratory methods for gathering contextual information, but these activities could easily be disrupted by time pressures and by the focus on delivering a trial.

The teams acknowledged that RCTs were not a universal solution to all research questions and policy puzzles. However, for many who were interviewed for our research, there was a determined focus on particular methods, known in one team as an ‘RCT or the highway’ approach.

This focus on RCTs helped nudge units gain credibility and legitimacy in policymaking circles. But our research also demonstrates a growing awareness that the RCT focus might constrain the development and delivery of useful projects. First, team members generally understood the importance of the political environment of policymaking, and recognised that their commitment to RCTs might limit them to studying less complex or contested issues. Second, there was a significant risk of appearing technocratic or unethical in the eyes of partners or stakeholders. The use of more technical language, requiring analytical expertise, can reduce transparency for the public, and even within government itself. Some of the language about the superiority of ‘gold-standard’ RCTs tended to sideline professional and stakeholder knowledge from policy analysis, equating ‘experience’ with anecdotal or intuitive evidence. Yet the support and engagement of citizens, professionals and stakeholders are often critical in building support for policy interventions.

Our research accepts that RCT research methods can provide useful opportunities for learning about ‘what works’. But our research shows that while individual members of BI teams may privately acknowledge the limits to RCTs, the official narrative of scientific authority remains at the core of the value proposition for BI teams.

From our research, we noted two concerns that may inhibit the long-term impact of BI on government policymaking. First, it is important to acknowledge the limitations placed on the scope of research questions, both methodologically and politically, when projects are centred on conducting RCTs. A renewed focus on better linking the micro and macro levels of analysis could address some of these concerns (see the work of Ewert, Loer and Thomann in the forthcoming special issue of Policy & Politics on Behavioural Insights).

Second, reliance on RCT-based science downplays the political context of policymaking, as well as the widely recognised need to build civic trust by moving away from technocratic and opaque government processes. The scientific nature of RCTs is supposed to promote objectivity and ‘depoliticise’ the evaluation of policy options; however, this masks the political context of selecting research questions and instruments, and the use of RCTs has largely proceeded on the basis of excluding the voices of users and citizens.

The fast-moving pace of policy debate, and competition among the champions of different models for policy development, suggest that behavioural insights teams may need to learn to work with broader conceptions of knowledge and experiential expertise if they wish to remain influential.

Reflecting on this in light of the coronavirus pandemic, a further lesson has emerged: if these teams are to expand beyond a reliance on RCTs, they need to be very cautious about how they do so. While the Australian behavioural insights teams have been relatively silent, publicly at least, on the Australian government’s policy response to the crisis, the BIT UK has been far more closely involved in the UK government’s decision making. The team has since come under fire for one suggestion: that the UK government delay lockdown in order to avoid ‘behavioural fatigue’. While it is possible that people will experience frustration with, and eventual resistance to, a lockdown, the evidence was argued to be insufficient, and this led to a backlash in the media. The concept of behavioural fatigue could never have been trialled, which introduced a significant risk. The backlash has raised questions about the ethics of behavioural approaches more broadly.

This hardly means that there is no value in behavioural science without RCTs. Several existing studies of handwashing practices can be, and are being, utilised, and these teams have accrued significant evidence of their own regarding the framing of government communication to improve comprehension and uptake. Process mapping or analysis of administrative data could also be undertaken to determine what barriers citizens face in trying to access services.

While our paper encourages behavioural insights teams to expand their view beyond the use of trials, there are inevitably boundaries to what is considered appropriate in the use of behavioural interventions in government, if only because of the perception of misuse. When RCTs are not part of the equation, a significant degree of caution is not only advisable – it is likely to be essential. Cass Sunstein, co-author of Nudge, and Lucia Reisch have, for example, proposed a Bill of Rights for nudging in their recent book Trusting Nudges. Its principles include aligning interventions with people’s values and interests, pursuing transparency and finding ways to ensure consent. Behavioural insights teams may find these concepts a useful guide to the public acceptability of behavioural interventions – with or without RCTs.


Sarah Ball is currently working on her PhD at the Institute of Social Science Research at the University of Queensland. Brian Head is Professor in the School of Political Science and International Studies and Acting Director of the Centre for Policy Futures at the University of Queensland.