Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

By Joanna Redden, Associate Professor, Information and Media Studies, Western University, Canada

In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms.

Alston was worried for good reason. Research shows that ADS can be used in ways that discriminate, exacerbate inequality, infringe upon rights, sort people into different social groups, wrongly limit access to services and intensify surveillance.

For example, families have been bankrupted and forced into crises after being falsely accused of benefit fraud.

Researchers have identified how facial recognition systems and risk assessment tools are more likely to wrongly identify people with darker skin tones and women. These systems have already led to wrongful arrests and misinformed sentencing decisions.

Often, people only learn that they have been affected by an ADS application when one of two things happens: after things go wrong, as was the case with the A-levels scandal in the United Kingdom; or when controversies are made public, as was the case with uses of facial recognition technology in Canada and the United States.

Automated problems

Greater transparency, responsibility, accountability and public involvement in the design and use of ADS are important to protect people’s rights and privacy. There are three main reasons for this:

  1. these systems can cause a lot of harm;
  2. they are being introduced faster than necessary protections can be implemented; and
  3. there is a lack of opportunity for those affected to make democratic decisions about whether they should be used and, if so, how.

Our latest research project, Automating Public Services: Learning from Cancelled Systems, provides findings aimed at helping to prevent harm and contributing to meaningful debate and action. The report provides the first comprehensive overview of systems being cancelled across western democracies.

Researching the factors and rationales leading to the cancellation of ADS helps us better understand their limits. In our report, we identified 61 ADS that were cancelled across Australia, Canada, Europe, New Zealand and the U.S. We present a detailed account of systems cancelled in the areas of fraud detection, child welfare and policing. Our findings demonstrate the importance of careful consideration and concern for equity.

Reasons for cancellation

There are a range of factors that influence decisions to cancel the uses of ADS. One of our most important findings is how often systems are cancelled because they are not as effective as expected. Another key finding is the significant role played by community mobilization and research, investigative reporting and legal action.

Our findings demonstrate there are competing understandings, visions and politics surrounding the use of ADS.

There are a range of factors that influence decisions to cancel the uses of ADS. (Data Justice Lab), Author provided

Hopefully, our recommendations will lead to increased civic participation and improved oversight, accountability and harm prevention.

In the report, we point to widespread calls for governments to establish resourced ADS registers as a basic first step towards greater transparency. Some countries, such as the U.K., have stated plans to do so, while others, like Canada, have yet to move in this direction.

Our findings demonstrate that the use of ADS can lead to greater inequality and systemic injustice. This reinforces the need to be alert to how the use of ADS can create differential systems of advantage and disadvantage.

Accountability and transparency

ADS need to be developed with care and responsibility by meaningfully engaging with affected communities. There can be harmful consequences when government agencies do not engage the public in discussions about the appropriate use of ADS before implementation.

This engagement should include the option for community members to decide areas where they do not want ADS to be used. Good government practice can include taking the time to ensure that independent expert reviews and impact assessments focused on equality and human rights are carried out.

Governments can take several different approaches to implement ADS in a more accountable manner. (Data Justice Lab), Author provided

We recommend strengthening accountability for those wanting to implement ADS by requiring proof of accuracy, effectiveness and safety, as well as reviews of legality. At minimum, people should be able to find out if an ADS has used their data and, if necessary, have access to resources to challenge and seek redress for wrong assessments.

There are a number of cases listed in our report where government agencies’ partnership with private companies to provide ADS services has presented problems. In one case, a government agency decided not to use a bail-setting system because the proprietary nature of the system meant that defendants and officials would not be able to understand why a decision was made, making an effective challenge impossible.

Government agencies need to have the resources and skills to thoroughly examine how they procure ADS.

A politics of care

All of these recommendations point to the importance of a politics of care. This requires those wanting to implement ADS to appreciate the complexities of people, communities and their rights.

Key questions need to be asked about how the use of ADS creates blind spots. Scoring and sorting systems increase the distance between administrators and the people they are meant to serve, and they can oversimplify, infer guilt, wrongly target and stereotype people through categorization and quantification.

Good practice, in terms of a politics of care, involves taking the time to carefully consider the potential impacts of ADS before implementation, being responsive to criticism, ensuring ongoing oversight and review, and seeking independent and community input.

Drawing parallels – the processing of data about children in education and social care

By Sarah Gorin, Ros Edwards and Val Gillies

During our research, we have been learning more about the ways that Government agencies in areas such as health, social care and education collect, process and join up information about families. Schools, like other Government agencies, collect and process an increasing volume of information about children. Data is collected for administrative purposes, such as monitoring attendance, attainment, progress and performance; for safeguarding children; and to promote and support education and learning.

Information about children is captured not only by schools, for their own purposes and for purposes determined by the Government, but also by private educational technology (EdTech) companies, which gather data on children via their use of apps that may be free to download and recommended by teachers as promoting learning. These companies may sell on information for marketing or research purposes. Since the pandemic, the use of EdTech has grown exponentially, meaning that even more data is being gathered on children both through schools and by EdTech providers, raising the stakes in terms of the protection of children’s personal data.

A new report by The Digital Futures Commission (DFC), ‘Education Data Reality: The challenges for schools in managing children’s education data’, examines the views of professionals who work in or with schools on the procurement of, data protection for, and uses of digital technologies in schools. The report describes the range of EdTech used in schools and the complex issues that managing it presents.

In a blog about the report, the main author Sarah Turner highlights four key issues that constrain children’s best interests:

  • The benefits of EdTech and the data processed from children in schools are currently not discernible or in children’s best interests. Nor are they proportionate to the scope, scale and sensitivity of data currently processed from children in schools.
  • Schools have limited control or oversight over data processed from children through their uses of EdTech. The power imbalance between EdTech providers and schools is built into the terms of use that schools sign up to and exacerbated by external pressure to use some EdTech services.
  • There is a distinct lack of comprehensive guidance for schools on how to manage EdTech providers’ data practices. Nor is there a minimum standard for acceptable features, data practices and evidence-based benefits to help schools navigate the currently fragmented EdTech market and select appropriate EdTech that offers educational benefits proportionate to the data it processes.
  • Patchy access to and security of digital devices at school and home due to cost and resource barriers means that access to digital technologies to deliver and receive education remains inequitable.

The report is focused on the processing of education data about families; however, there are many interesting parallels with the findings from our project on the way data about families is collected, processed and used by local authorities:

  • Firstly, there is a lack of evidence about the benefits of the use of digital technologies in both schools and local authorities, and a lack of understanding about the risks to children’s data privacy.
  • There is a lack of government guidance for schools, as there is for local authorities, about the digital technologies that they employ, meaning that organisations are left individually responsible for ensuring that they are compliant with the General Data Protection Regulation (GDPR).
  • Schools, like local authorities, are time-, resource- and expertise-poor. Often neither has the data protection expertise to understand and weigh the risks versus the benefits of data processing for children’s best interests.
  • There is a lack of transparency in how data is collected, handled and processed by Government agencies, as well as by third parties who gain access to data about families, either through children using their apps for educational purposes or through local authorities employing them to develop predictive analytics systems.
  • Public awareness and understanding of how data is collected and processed, and of the risks that data sharing poses to children’s privacy, are low among parents and children.

We welcome this new report by the Digital Futures Commission and hope that it stimulates more discussion and awareness amongst professionals and families.

Children’s visibility, vulnerability and voice in official statistics and their use

By Sarah Gorin, Ros Edwards and Val Gillies

Throughout our project, we have been looking at parental social licence for the linking together of Government data about families’ lives across areas such as health, education and social care. Whilst our research focus has been on parents, it is also important that we listen to children’s views. A vast amount of data is collected about children across Government and non-Government agencies, yet children and young people are rarely asked what they consider to be acceptable uses of their personal information. It is important that children are given this opportunity under Article 12 of the UN Convention on the Rights of the Child, which requires that children’s views be heard and considered on all matters that affect them.

A recent report, ‘Visibility, Vulnerability and Voice’, by The Office for Statistics Regulation (an independent body that regulates the use of official statistics) has drawn attention to the importance of including children and young people in official statistics.

The report provides a framework, which it calls the ‘3Vs’, for considering the needs of children and young people in the development of official statistics. It suggests viewing statistics about children and young people through three lenses: ‘Visibility’, making statistics on children and young people available; ‘Vulnerability’, ensuring the collection and analysis of data about children who are vulnerable to poorer outcomes; and ‘Voice’, ensuring that statistics reflect the views of children and young people and that they are given a voice in how their data is used.

In considering children’s ‘Voice’, the Office for Statistics Regulation recommends that all official statistics producers should:

  • Seek the views of children and young people themselves rather than relying on proxies from adults.
  • Consider, and respond to, the data needs of children and young people.
  • Involve children and young people in the development of statistics for and about them.
  • Ensure children and young people have a voice around how their data are used in official statistics and in research using the data underpinning them.

Whilst the report focuses on the need to involve children and young people in the development of official statistics, the same also applies more broadly to the development of policy around the use of data. A report by DefendDigitalMe, ‘The Words We Use in Data Policy’, considers the way children are framed in data policy and the lack of representation of, or engagement with, children about their views. We welcome these reports and their focus on, and commitment to, improving opportunities for children and young people to be involved in developments in the way their data is linked together and used.

Voices come together to highlight the need to strengthen children’s data rights in response to Government consultation ‘Data: a new direction’

By Sarah Gorin, Ros Edwards and Val Gillies

As part of our research project, we are constantly trying to keep up with developments in the Government’s use of children’s data. In Autumn 2021, the U.K. Government released a consultation, ‘Data: a new direction’, to which we responded, drawing on what we have learned from parents who have participated in our research.

Our submission highlighted the already fragile nature of public trust in joining up administrative data about families, particularly amongst marginalised groups, and the need to press pause on data sharing and artificial intelligence (AI) systems in the U.K. until there is greater regulation and safeguarding of families’ data from use by private companies.

Fortunately, we are not alone in raising concerns about the proposals.

Organisations such as the 5Rights Foundation and DefendDigitalMe have also made submissions to the consultation that highlight the lack of attention to the impact of these proposals on children and their families.

In this blog, we summarise some of the key recommendations made in response to the consultation by LSE’s Professor Sonia Livingstone and 5Rights Foundation researcher Dr Kruakae Pothong, who have drawn on their extensive research experience to inform their children’s rights-based response (for more detail see their LSE blog).

Professor Livingstone and Dr Pothong’s submission highlights the need for greater, not lesser, support for children’s rights as data subjects and questions whether the proposed changes to the UK General Data Protection Regulation are indeed in line with international human rights and child rights developments.

Key concerns raised in the submission include:

  • The need to maintain a clear boundary between the use of personal data for scientific research and the re-use of this scientific research for commercial purposes, with meaningful consent obtained for any further processing to occur.
  • The importance of scientific and public interest research that identifies children to demonstrate fully how the acquisition and processing of data would affect children and how they can exercise their data rights.
  • The importance of not proceeding with the Government’s proposed “limited, exhaustive list of legitimate interests” as a basis for processing data. Currently, legitimate interests can be a lawful basis for processing data only when processing is ‘necessary’ and when there is a balance between the interests of the data subjects and others’ interests.
  • The need to maintain rather than remove (as proposed in the Consultation) the requirement for organisations to undertake data protection impact assessments. They argue that instead the additional safeguards of the data protection impact assessment and a child rights impact assessment should be mandated for use before and after the processing of children’s personal data.
  • The importance of retaining the requirement to have Data Protection Officers in all circumstances, especially educational contexts.
  • Maintaining free subject access requests, as charging families would adversely affect children’s rights and make it harder for children and families to correct inaccurate information held about them, potentially negatively affecting them in the short and long term.

The Government consultation closed in November 2021 and we await the outcome.

Generating transparency where none exists: just how are data analytics used in children’s services?

By Val Gillies, Ros Edwards and Sarah Gorin 

The Government’s public consultation on changes to the data protection framework emphasises the importance of public trust and transparency. But when, as part of our research, we tried to establish basic facts about the extent to which local authorities are linking and analysing data on children and families, we hit a brick wall.

Our research is aiming to provide a clearer picture of what parents think about the ways information about them and their children may be linked together and used by local councils. An important part of this work has been speaking directly to parents to see how much support for and trust in this type of activity there is. Alongside these valuable and enlightening conversations, we have also been trying to map the state of play among British local authorities and to find out exactly which authorities are doing what when it comes to operational data linking and matching and the application of predictive analytics to families’ data. 

The Government’s declared commitment is to roll out a ‘world class nationwide digital infrastructure’ and ‘unlock the power of data’, but there is currently no central record available of which authorities are doing what.

Freedom of information requests

To try to find out, we submitted Freedom of Information requests to 220 local authorities in the UK. The 149 English councils participating in the ‘Troubled Families Programme’ (now called the Supporting Families Programme) must, by necessity, link and analyse datasets to identify ‘troubled’ households and claim payment-by-results from central government. Yet only 76 responded that they used data analytics. The remainder claimed that their systems did not meet our definition or responded with a straight ‘no’ to all our questions about their use.

English councils claiming to fall outside our criteria responded in a vague and evasive way. For example, some responded ‘no’ when asked about their engagement with data analytics, either by positioning family work as separate from children’s services or by using the term ‘data matching’ instead. Further investigation established that many of these councils do in fact use systems with predictive analytic capacity.

For example, Achieving for Children, a social enterprise providing services for children in several local authorities, responded to our FoI that beyond ‘some basic data monitoring … we do not currently use nor have we previously used any data analytics, predictive analytics and or artificial intelligence systems to assist with this work’. Yet they have used business intelligence technologies on a range of projects using predictive analytics/algorithms, as noted on the UK Authority Digital Data and Technology for the Public Good website.

Side-stepping terms

Our FoI research also showed that councils side-stepped the term ‘algorithm’ and the concept of AI. Even where they engaged in predictive analytics, they denied using algorithms, though it is hard to envisage one without the other given the tools they were employing.

We received a lot of incomplete, insufficient or irrelevant information. A number of councils claimed exemption from the FoI on cost grounds or commercial confidentiality. Where we followed up with more carefully worded requests, we received ambiguously worded replies.

Some local authorities were more forthcoming and open, listing the various tools and companies used to conduct their data analytics. Microsoft Business Intelligence was the most common tool cited. Dorset County Council has a case study on the Local Government Association website of how the tool can be used to ‘enable local professionals to identify potential difficulties for individual children before they become serious problems’. Our FoI established that the council plans to make greater use of AI in the future.

Our analysis of the responses we received, and of the information we have sourced from elsewhere, points to a clear shift in service priorities away from early intervention for parental education towards child protection and crime prevention. The earlier focus on linking parenting skills to social mobility is now muted, with rationales for data innovation focusing almost exclusively on the pre-emption of problems rather than on the maximisation of children’s future potential.

Our findings around children’s services have been reinforced by work by Big Brother Watch, which has published a detailed analysis of the use of hidden algorithms by councils that use trackers to identify disadvantaged households in order to target them for interventions. The organisation found one of the biggest players in this area, Xantura, to be particularly secretive.

Public trust 

A wide range of data ‘solutions’ are drawn on by local authorities to classify, flag, target and intervene in disadvantaged families and their children. Yet parents are not generally informed of how their information is being used and for what purpose. As we have shown, this is difficult even for us as researchers to establish.

From our work here, it is hard to see how public trust and transparency will be achieved from the opaque, ambiguous and even evasive base that our FoI requests revealed.

Freedom of Information Requests on the Use of Data Analytics in Children’s Services: Generating Transparency is research by Val Gillies and Bea Gardner, with Ros Edwards and Sarah Gorin.