Roger Taylor
Are people being driven to suicide by the stress of compulsory medical tests for sickness benefits? Are police in the US racially biased in the use of lethal force? Are statins saving thousands of lives or poisoning swathes of the population? Are you being overcharged for credit?
Enough questions? One more. What do these questions have in common? Not the fact that they are all matters of public interest, nor that they have recently been in the news, nor that you will find people with strong opinions on either side of the debate. There is one other thing that unites them – in each case the data needed to address the question is, to a significant degree, controlled by organisations which have much to lose if they get an answer that does not suit their interests.
The growth of information technology is sometimes seen as heralding a new age of openness and transparency – that it has empowered ordinary citizens. There are plenty of examples to back this view up. It is true that social media contributed to the Arab Spring; that you can set up a global business in your front room; and that airline prices are easier to compare.
The continuous scrutiny of public institutions through 24-hour news, the rise of blogging and social media, the fact that you can put anyone’s name into Google and likely find out something about them – these all contribute to the sense that the world is becoming more transparent, more open and less private.
But there is an alternative view – an opposing view – in which the growth of big data is having the opposite effect, making the world less transparent.
The starting point for this view is that openness and transparency cannot be measured in terms of the amount of information available to me. What matters is how much information I have compared to how much you have. Transparency fails us when one side is able to take unfair advantage of the other through better access to information. From that perspective, what matters is not the amount of information in the public domain; what matters is the relative access to information between people on opposite sides of an argument, negotiation or transaction.
Ask yourself this – has the growth of digital technology enabled you to know more about the state or the state to know more about you? Has the computerisation of banking enabled individuals to know more about banks or banks to know more about their customers?
Information technology may have increased the level of information in the public domain. But that increase is as nothing compared to the increase in the information assets available to the organisations and agencies with which we interact – the information that is used to inform decisions, justify policies and monitor the impact of everything from advertisements to tax laws.
If government agencies and businesses can use monopoly control of data sets to limit or shape the information around us and the types of narrative that enter the public domain, there is a sense in which the world is becoming more opaque, forcing us to place ever more trust in institutions.
The risks this poses can sometimes appear more theoretical than real and many people are entirely sanguine about the issue. After all, the benefits we are getting from better use of data seem to be worthwhile. At the fringes, there is strong agitation from the privacy lobby to limit, in particular, commercial use of data about individuals. But the political traction that the campaign can generate is limited by the degree to which people feel that the good has outweighed the bad. The fact that Google finds what I am looking for seems more important than any concern about how it did so, or fear that its algorithms might be generating biased results. I may not know how it uses information about me but it seems to work and that’s good enough.
In medicine, growing sophistication in the collection and analysis of data about people’s state of health is, in the main, credited with being wholly beneficial. We are quicker to spot emerging outbreaks and new diseases. We are more precise in our diagnoses. We are discovering new treatments.
Here, the risks are more visible. The sheer complexity of modern medicine means it is increasingly unlikely that any individual doctor could ever be expected to have a complete understanding of my healthcare. It means I am more likely to receive sub-optimal care, because optimal care is so complex and difficult to ascertain and the decision processes so opaque.
Ben Goldacre has written with great power about the lack of transparency within the pharmaceutical industry and the very real costs in terms of lives and ill-health that result. As the range of data about health increases with gene sequencing, phenotyping and wearable sensors, the failure to correctly interpret the data has the potential to become one of the major risks to my health. Without open sharing of data, there is a real risk that I will get the wrong information or the wrong treatment.
Even if I get the right information, lack of transparency poses a further risk – that I will refuse to believe it and start to mistrust the institutions on which I rely.
Take the example of statins. Over the last few years there has been a fierce debate between those who recommend that large numbers of people with high cholesterol should start taking statins to prevent possible future cardiovascular problems, and those who believe that this recommendation is based on distorted data created by the pharmaceutical industry to medicate healthy people with potentially dangerous drugs.
Attempts to quash the debate with definitive information – the most recent being a paper in the Lancet – have often foundered on a lack of trust. The BMJ, which published a paper criticising statins and subsequently published a correction, has called for greater transparency and complained that the data needed to understand the question is not being shared with those who are critical of statins. Others have complained that we cannot trust the data itself because most of it was created by the pharma industry, which has a track record of defining and capturing data in ways that flatter the drugs it produces.
The Black Lives Matter campaign in the US has raised similar issues. The campaign has focussed on police shootings of African-Americans. There is much evidence that the police, and indeed the entire criminal justice system, are racially biased. However, the question of whether that bias extends to lethal shootings is more contentious. Some detailed analyses find that, perhaps ironically, lethal shootings may be an exception and an area where the use or misuse of such force affects white people as much as black.
But any such evidence is subject to suspicion because the police themselves and other government agencies have so much control not only over access to information but also over the way in which information about incidents is recorded, stored and retrieved. Public protest has forced the police increasingly to release video footage of these incidents, which has had a powerful impact on public perceptions. And the mere existence of such information creates powerful incentives – a recent UK study found that the use of dash cams or body cams by police reduced complaints by an astonishing 93%. But the majority of the information about police activity, from stops and arrests to shootings, remains under the control of the police and the government bodies that commission them.
When people call for greater openness and transparency, the response of government is typically to commission an investigation or publish additional information. They agree to greater transparency and demonstrate this by managing a process by which additional information is published. These initiatives have often been beneficial. But they often leave those complaining about a lack of transparency dissatisfied.
The story of Work Capability Assessment illustrates the point. Work Capability Assessments are tests that people claiming sickness benefits must undergo periodically to assess whether they are unable to work. The policy has been justified less on the grounds that people are claiming benefits unfairly, and more on the grounds that it is good for claimants. Prolonged worklessness is associated with worsening health and a significant increase in the risk of depression and suicide. There are good reasons to try to ensure people do not remain on benefits longer than necessary.
The introduction of WCA prompted immediate and widespread complaints that the tests were unreliable, demeaning and being applied in situations which were plainly inappropriate. People said the stress of being threatened with financial destitution was damaging their mental health. The government responded by appointing an independent investigator to look into complaints which resulted in a series of reports making a number of recommendations on how to improve WCA.
In 2015, researchers at the Universities of Liverpool and Oxford published a paper which showed a correlation between areas where people were taking the tests and areas with higher rates of suicide. They had not been able to look at information about the individuals who had taken a test and had had to rely instead on data about the average number of tests in an area, comparing it to the average level of suicide. This left the analysis open to significant uncertainty, as the researchers acknowledged, over the degree to which the tests themselves were responsible for the suicides. The government rejected the implication as unproven.
Researchers investigating the question were not allowed access to the necessary data – held within government – to check whether or not their concerns were true. When the Work and Pensions Select Committee asked that the data be made available, Iain Duncan Smith replied that it would not be a good use of government resources to pull together the necessary data.
Transparency has been embraced by governments and corporations around the world as a ‘commitment’ or as a ‘core value’. It is offered up as the answer to every imaginable evil from corporate fraud, government corruption and child abuse by the church to medical malpractice, transport safety and overpriced utility bills.
But while governments and corporations are often willing to establish mechanisms by which particular information and particular narratives can come into the public domain, they have been less enthusiastic about allowing others access to the data necessary to test alternative narratives.
This is worrying. Because transparency, like freedom of speech and democracy, properly refers to something fundamental to a free and fair society. It is an idea that is more important than ever in the information age when surveillance capitalism, the database state and artificial intelligence are becoming the driving forces in business, politics and technology. We need a concept of transparency that is fit for the information age.
Our book, Transparency and the Open Society (co-authored with Tim Kelsey), surveys the current application of transparency around the world, assesses the strengths and weaknesses of different approaches and argues for the need to develop new approaches to transparency that can cope with advances in data systems and information technology.
The starting point is the idea of fairness. The reason why people care about transparency is because they worry about unfair treatment. Calls for transparency are driven by the fear that people are being duped or mistreated. Transparency only addresses that problem to the degree it gives people the ability to assess whether or not the organisations and social institutions they interact with are operating in a fair manner.
Traditionally, transparency has been defined in terms of the behaviour of institutions, with those that provide more information regarded as more transparent than those that do not. But if we define transparency in terms of the degree to which information is of value to the citizen, it quickly becomes clear that access to more information does not necessarily increase transparency. For example, if I can only access information presented in ways that support a particular narrative – a narrative that conflicts with what I believe to be true – it doesn’t matter how much of it is put in the public domain; it does nothing to help me. Indeed, at times, some transparency initiatives appear as attempts to steer the public narrative in a particular direction and disempower certain communities rather than empower people.
Law courts are one of the few places that explicitly aim to create an environment of equal access to information in order to prevent either side in a trial having an unfair advantage over the other. In theory, anything material that one side has access to must be made available to the other. This is the law of discovery or disclosure in US and UK legal systems.
A tactic used by lawyers to undermine disclosure is over-disclosure. If you know that there is one file containing a vital piece of information that helps the other side’s case, or one witness with a crucial statement to make, you can protect yourself by releasing not the one file but 15,000 files with that one hidden amongst them; or by releasing the name of the witness along with 15,000 names of other potential witnesses. That way you comply with the law, but leave your opponent with little chance of finding the key pieces of information.
Equally, courts insist that statements made by one side must be open to cross-examination by the other. But if an organisation controlling information uses it to present evidence to support its case and then prevents others from cross-examining that same data, the effect can be to decrease transparency: the organisation supposedly being transparent increases its informational advantage by placing claims in the public domain that cannot be challenged.
The courts have recognised the issue and have been willing to enforce complete openness of data on institutions. For example, in the US when car finance companies were sued for racial discrimination, the courts forced them to hand over the complete loan book data with financial records about every customer to the plaintiff’s lawyers. The loan companies had strongly denied any racial bias in their lending. Access to the full customer data made it possible to demonstrate that African-Americans were being systematically disadvantaged and that the reasons offered to try to account for this were flawed.
Transparency, then, is not about the quantity of information available; it is about the relative degree of control over that information between parties with conflicting interests concerning what it can be used to evidence. Control of information is about much more than access. It relates to the whole production and supply chain, from the infrastructure that supports the recording and storage of information to the ways it is analysed and used. Lack of transparency occurs when one side has an advantage that it can unfairly exploit at any stage of that process.
Furthermore, if transparency is about trying to create a level playing field between parties with conflicting interests, it follows that transparency is not just about access to data. It must also encompass individual and institutional capabilities to use that data. Freedom of speech in a modern democracy would mean little without the existence of media organisations. It is their role in collecting, analysing, disseminating and commenting on information that leads to an effective public discourse about political arrangements.
We now live in a surveillance society. The word is usually associated with oppression and fear, so it is worth reminding ourselves how fantastically useful surveillance can be. Surveillance is how Google is able to predict which of the billions of bits of information on the internet is most likely to be the one you are interested in. Surveillance can be used to expertly calibrate risks to people’s financial circumstances. It can reveal the nature of disease and potential cures. It can help governments protect public safety.
The problem is that it is equally effective as a tool to suppress dissent, to mis-sell investments and drugs; or to manipulate the media environment and the public narrative.
This poses a problem for democratic accountability. The media and freedom of the press have provided a powerful mechanism to collect information and hold to account the institutions around us. But the information they work with is interviews and hearings, leaks and demonstrations, documents and press releases. Today, an ever-growing proportion of the most interesting information about our institutions comes in the form of large data sets that require specialist skills to interpret. The challenge of holding such institutions to account is beyond the capabilities of even the most sophisticated media organisations.
Furthermore, the task is becoming exponentially harder. Currently, the algorithms that use data to decide whether we should be treated one way or another are reasonably fixed over time. They may develop and shift. But at any one moment you can ask to see how data is used and the statistical evidence that supports or undermines a particular policy, prescription, financial judgement or clinical diagnosis. This is true even of the more sophisticated machine-driven systems, whether it be an algorithm that assigns a penalty to a number plate caught on a police camera or an algorithm that determines whether your blood pressure puts you in a category needing medication.
Machine-learning systems change that relationship. With machine-learning algorithms, only the initial parameters within which the machine creates the algorithm are fixed. The algorithm itself can be allowed to constantly adapt to new information. There is no longer a relationship between data about an individual and a fixed algorithm justified by reference to a fixed data set. Instead we move into a world where the data is continually shifting and the algorithm applied to information about an individual may be entirely unique – the one-off product of that individual’s data in relation to a constantly adapting data set at a specific moment in time. Assuring ourselves that we are happy to consent to be governed by such systems becomes both a more complex and a simpler task. More complex because interpreting information in such circumstances can be harder. But simpler because there is less that you can feasibly know about such systems.
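The difference between a fixed rule and a continually adapting one can be sketched in a few lines of code. This is a toy illustration only – the blood-pressure threshold, the update rate and the readings are all invented for the example, and no real clinical system works this crudely – but it shows how the same question, asked of the same individual, can get a different answer once the rule has absorbed new data.

```python
def fixed_rule(blood_pressure, threshold=140):
    """A fixed algorithm: inspect it once and you know how every
    individual will be treated until the code itself is changed."""
    return blood_pressure >= threshold


class AdaptiveRule:
    """An online-learning caricature: the threshold drifts towards the
    mean of the readings it has seen, so the decision applied to any
    individual depends on every prior observation and on the moment
    the question is asked."""

    def __init__(self, threshold=140.0, rate=0.1):
        self.threshold = threshold
        self.rate = rate

    def observe(self, reading):
        # Nudge the threshold a fraction of the way towards each new reading.
        self.threshold += self.rate * (reading - self.threshold)

    def decide(self, blood_pressure):
        return blood_pressure >= self.threshold


rule = AdaptiveRule()
before = rule.decide(138)            # False: 138 is below the initial 140
for reading in [120, 118, 125, 122, 119]:
    rule.observe(reading)
after = rule.decide(138)             # True: the threshold has drifted to ~132
```

The same reading of 138 is classified differently before and after the stream of observations, even though no one ever rewrote the algorithm – which is precisely why a one-off audit of such a system tells you so little.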
There are huge benefits to such technology and we should embrace its ability to learn and improve. But one thing it does not do is increase transparency. The challenge for information-driven societies is how to create institutions that allow us to benefit from the astonishing opportunities that data technology offers without allowing it to be turned against us. At one extreme there is an argument that protection lies in regulation and legal oversight. At the other extreme is the argument that privacy is our only protection and that we must reject surveillance and all its potential benefits.
Neither is appealing. Instead we need to look at how control over data can be democratically shared – creating the narratives, tools and arrangements through which we can see our society for what it is, and enjoy the vast benefits of surveillance on terms that we can trust to operate fairly.
Transparency and the Open Society looks at mechanisms that can be used to limit commercial or governmental monopoly control over data assets. There are, of course, cases where such control is necessary for national security reasons. There may equally be instances where such control is trivial and the costs of ending it outweigh the benefits. But where there are significant public risks and limited public benefit from such control, we need a mechanism by which it can be broken and data shared. The only way society can be assured that these powerful and valuable technologies are beneficial is an opening up of access to, and control over, data assets.
Without this we are either blind to the impact of our social institutions or we give those who operate them a monopoly control over the public narrative about their benefits. The end result can be a world in which there is an official narrative about the effectiveness and fairness of public institutions that ceases to ring true to people, and which seems to contradict the evidence of their own eyes and ears.
Roger Taylor is co-author (with Tim Kelsey) of Transparency and the Open Society, published by Policy Press. He co-founded Dr Foster, the healthcare business, and is founder and chair of the Open Public Services Network at the Royal Society of Arts.