Karin Ingold, Frédéric Varone, Marlene Kammerer, Florence Metz, Lorenz Kammermann, and Chantal Strotz
This section of Discover Society is provided in collaboration with the journal, Policy and Politics. It is curated by Sarah Brown.
Most researchers struggle to find the most appropriate way to gather data about political actors and their opinions, ideologies, or positions. Often, the choice of data gathering method is restricted by a lack of time or resources. In the fortunate situation where both interviews and text or media coding are possible, there still seems to be a limited understanding of the differences in the results that the two produce. In our recent Policy & Politics article, we show the theoretical, empirical, and real-world implications of the choice of data gathering method and of its timing (before or after the final policy decision).
Knowing more about the positions or preferences of politically involved actors is relevant for several reasons. The final policy that is introduced is nothing less than the policy preferences of the most successful or powerful actors in a political negotiation process. Studying policy positions thus means analysing the policy process on the one hand, and gaining important insights into the design features and content of a specific policy on the other. Policy preferences are, in a sense, the glue between politics (the process) and policy (the content), and they thus enhance our understanding of processes such as policy emergence, learning, or change. Policy positions can serve either as independent variables (for example, to explain coordination among actors or their strategic behaviour) or as dependent variables (for example, to evaluate actors’ coherence over time).
But how do I best identify these policy positions? Should I conduct interviews or code the official statements of actors involved in policymaking? How valuable are my survey results in comparison with media data? As mentioned at the beginning, these are typical questions concerning methods of data gathering, and there are unlikely to be absolute answers. In our article, however, we contribute to answering them, as we were in the unique position of having systematically gathered data about the same set of actors involved in three different policymaking processes. These actors stated their policy positions twice: once officially, and once in a written survey. In all three cases, the written survey took place just after the official consultation.
The time delay between the two data gathering phases is short, but it might still provide opportunities for actors to change their policy positions. This is why we formulated hypotheses about actor types that are particularly sensitive to this time delay (that is, the target groups of instruments and the losers of the policy process). That the survey takes place after the consultation, when the policy has already been introduced, can be considered a standard situation in policy studies, as political actors are reluctant to answer surveys during an ongoing process.
We use data from three policy subsystems (climate, energy, and water protection) in Switzerland and test our hypotheses using descriptive statistics, OLS regression, and multi-level models. We thereby keep the larger (Swiss) institutional context constant.
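To make this design concrete, here is a minimal sketch of what such a test could look like, written in Python with pandas and statsmodels. It is not the authors' actual analysis code: the variable names (position_consultation, position_survey, loser, target_group, subsystem) and the toy data are illustrative assumptions only.

```python
# Minimal sketch of the analysis design described above.
# NOTE: all variable names and the toy data are illustrative assumptions;
# this is not the authors' actual dataset or code.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: each row is one actor's assessment of one policy
# instrument, measured twice (official consultation vs. later written survey).
df = pd.DataFrame({
    "position_consultation": [2, 1, 4, 3, 1, 5, 2, 3, 4, 2, 1, 3],
    "position_survey":       [3, 3, 4, 4, 2, 5, 2, 4, 5, 3, 3, 3],
    "loser":        [1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0],  # lost the policy battle?
    "target_group": [0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1],  # targeted by the instrument?
    "subsystem": ["climate", "climate", "climate", "climate",
                  "energy", "energy", "energy", "energy",
                  "water", "water", "water", "water"],
})

# Discrepancy: how much more positively the instrument is rated in the
# (later) survey than in the (earlier) official consultation statement.
df["discrepancy"] = df["position_survey"] - df["position_consultation"]

# OLS: does actor type predict the size of the discrepancy?
ols = smf.ols("discrepancy ~ loser + target_group", data=df).fit()
print(ols.summary())

# Multi-level variant: random intercepts per policy subsystem, treating
# the subsystem as a grouping level within the constant institutional context.
mlm = smf.mixedlm("discrepancy ~ loser + target_group", df,
                  groups=df["subsystem"]).fit()
print(mlm.summary())
```

The key design point is that the dependent variable is the difference between two measurements of the same position, so the actor-type coefficients directly capture the post-decision shift discussed below.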
Interestingly, the results show a general pattern: actors tend to evaluate policies more positively in the (later) survey than in the (earlier) official statement. It is mainly the losers of the policy process who improve their assessment between the consultation phase and the survey phase, and they therefore show a higher discrepancy than other actors. This might be an indicator of a ‘correction’ of their position once they know about their policy defeat.
Nevertheless, this is not a trivial result. First, it shows that the timing of data gathering seems to be relevant. Second, when set against theories such as the Advocacy Coalition Framework (Sabatier and Jenkins-Smith, 1993), which conceptualises policy beliefs and preferences as stable, it suggests at least two interesting rationales: either actors correct their positions in the survey situation, acting through mechanisms of social desirability; or policy positions are not as stable as some frameworks predict.
The exceptions to the trend of evaluating instruments more positively in the survey than in the consultation are also interesting: target group actors evaluate the relevant policy instruments more negatively, and thereby also show a higher discrepancy between officially stated and survey-stated policy positions than other actors (although our models show no significant or large effect for the target group predictor variable).
What are the broader implications of these empirical findings for (comparative) policy studies? As the losers of the political game seem to have a systematic tendency to improve their instrument assessment between consultation and survey, it is worth reflecting on who the losers of the process might be, particularly when working only with survey data. This finding is highly significant, as policy analysis aims to know ‘who gets what, when, how’ (in Lasswell’s (1956) seminal formulation). Indeed, it is more difficult to know accurately ‘who gets what’ if one cannot be sure whether actors defeated during a policy battle will remain consistent in reporting their positions or will tend to revise them quickly.
Furthermore, as our research demonstrates, actors have a tendency to change their positions after the introduction of a policy, so the timing of data gathering seems to be crucial: the effects of social desirability and of the ‘correction’ of an actor’s own position can differ before and after a policy is introduced. In short, timing and actor type matter when researchers draw conclusions about policy positions, their relevance for policymaking, or their stability over time.
References:
Lasswell, H.D. (1956) The decision process, College Park, MD: University of Maryland Press.
Sabatier, P.A. and Jenkins-Smith, H.C. (1993) Policy change and learning: An advocacy coalition approach, Boulder, CO: Westview Press.
Karin Ingold, Marlene Kammerer, Florence Metz and Lorenz Kammermann are, respectively, professor, postdoctoral researchers, and researcher at the Institute of Political Science at the University of Bern. Frédéric Varone is professor in the Department of Political Science and International Relations at the University of Geneva. Chantal Strotz is a researcher at the University of Lucerne.