Using Artificial Intelligence in public services – does it breach people's privacy?

By Ros Edwards, Sarah Gorin and Val Gillies

As part of our research, we recently asked parents what they thought about the use of data linkage and predictive analytics to identify families for targeted public services.

They told us that they didn't trust these processes. This was particularly the case among marginalised social groups. In other words, the groups of parents who are most likely to be the focus of these AI identification practices are the least likely to see them as legitimate. Now a new report by the United Nations High Commissioner for Human Rights, Michelle Bachelet, highlights major concerns about the impact of artificial intelligence, including profiling, automated decision-making and machine learning, on individuals' right to privacy.

The report makes a number of recommendations, including a moratorium on the use of AI systems that pose a serious risk to human rights, and a ban on the social scoring of individuals by Governments and on AI systems that categorise individuals into groups on discriminatory grounds.

The right to privacy in the digital age: report (2021) builds on two previous reports by the High Commissioner looking at the right to privacy in the digital age. It incorporates the views of international experts at a virtual seminar, as well as responses to the High Commissioner's call for input into the report from member states, including the U.K.

It examines the impact of digital systems such as artificial intelligence in four sectors, including public services. Artificial intelligence is used in public services such as social care, health, police, social security and education in a range of ways, from decision-making about welfare benefits to flagging families for visits by children's social care services.

Concerns are expressed about the linking together of, for example, large health, education and social care data sets with other data held by private companies, such as social media companies or data brokers, who, the report says, may gather information outside protective legal frameworks. The involvement of private companies in the construction, development and management of public sector data systems also means they can gain access to data sets containing information about large parts of the population.

There are additional concerns about the potential inaccuracy of historic data and the implications of that for future decision-making. The report states that these systems unequally "expose, survey and punish welfare beneficiaries" and that conditions are imposed on individuals that can undermine their autonomy and choice.

In the Netherlands, a court banned a digital welfare fraud detection system, ruling that it infringed individuals' right to privacy. The system provided central and local authorities with the power to share and analyse data that were previously kept separately, including data on employment, housing, education, benefits and health insurance, as well as other forms of identifiable data. The tool targeted low-income and minority neighbourhoods, leading to de facto discrimination based on socioeconomic background.

The recommendations in the report include:

  • using a human rights based approach
  • ensuring legislation and regulation are in line with the risk to human rights, with sectors including social protection to be prioritised
  • development of sector specific regulation requirements
  • drastic improvements to efforts regarding transparency, including use of registers for AI that contain key information about AI tools and their use, informing affected individuals when decisions are being or have been made automatically or with the help of automation tools, and notifying individuals when the personal data they provide will become part of a data set used by an AI system.

Given the concerns about the risks that the use of data linkage and predictive analytics poses to the human rights of individuals and families, it is vital to pay heed to the UN High Commissioner's call for a moratorium. Public authorities need to pay meaningful attention to the lack of social legitimacy for AI, as evidenced in our research, and to ask themselves whether the risk of further distrust and disengagement from already marginalised social groups, and the consequences for a cohesive and equal society, is worth it.

The role of public services in addressing child vulnerability

Our response to the House of Lords Public Services Committee call for evidence on the role of public services in addressing child vulnerability.

The House of Lords Public Services Committee inquiry, 'The role of public services in addressing child vulnerability', asked whether reforming public services can address the growing problem of child vulnerability.

The inquiry covers how public services support mothers and families during pregnancy, and how they support children in their early years and school years.

One of the premises of the Committeeā€™s call for evidence is that public services should share data as part of their duty to keep children safe.

Our project team provided evidence that the Committee needs to consider the wider implications of, and interests involved in, operational data sharing and data linkage for early intervention (lessons that can also be drawn from Covid-19), in particular:

  • The importance of transparency and informed consent for the use of families' administrative records
  • The wider social legitimacy of and trust in institutions, especially for marginalised social groups

Question marks over data analytics for family intervention

by Ros Edwards, Sarah Gorin and Val Gillies

The National Data Strategy encourages the UK's central and local government to team up with the private sector to digitally share and join up records to inform and improve services. One example of this is the area of troublesome families, where it's thought that the use of merged records and algorithms can help spot or pre-empt issues by intervening early. But there are questions over this approach, and it is something our project has been looking into. In our first published journal article, we examine the rationales presented by the parties behind data analytics used in this context, to see if they really do present solutions.

The application of algorithmic tools is a form of technological solution: it draws on indicators in routinely collected data to produce profiles, patterns and predictions that enable services to target and fix troublesome families. But local authorities often need to turn to commercial data analytic companies to build the required digital systems and algorithms.

In our paper we analysed national and local government reports and statements, and the websites of data analytic companies, addressing data linkage and analytics in the family intervention field. We looked in particular at rationales for and against data integration and analytics. We used a 'problem-solving' analytic approach, which focuses on how issues are produced as particular sorts of problems that demand certain sorts of solutions to fix them. This helped us to identify a double-faceted chain of problems and solutions.

Seeking and targeting families

Families in need of intervention and costing public money are identified as a social problem, and local authorities are given the responsibility of fixing that problem. Local authorities need to seek out and target these families for intervention, and it is experts in data analytics who, in turn, will solve that identification problem for them. These companies, in their turn, are reliant on citizens being turned into data (datafied) by local authorities and other public services.

We identified three main sorts of rationales in the data analytic companies' promotion of their products as solutions to local authorities' problems: the power of superior knowledge, harnessing time, and economic efficiency.

Companies promote their automated data analytics products as powerful and transformational. These products hand control of superior, objective and accurate knowledge to local authorities, so that they can use profiling criteria to identify, for intervention, families where there are hidden risks. And their systems help local authority services such as social care and education collaborate with other services, like health and the police, through data sharing and integration.

Data analytics is presented as harnessing time in the service of local authorities: an early warning system that enables them to identify families quickly as problems arise. It offers a holistic view based on the existing past records that local authorities hold about families, combined with the inputting of 'real time' present administrative data on families as it comes in. In turn, this provides foresight, helping local authorities into the future – predicting which families are likely to become risks and acting to pre-empt this, planning ahead using accurate information.

Another key selling point for data analytics companies is that their products allow economic efficiency. Local authorities will know how much families cost them, and can make assured decisions about where to put or withdraw financial and staffing resources. Data analytic products produce data trails that allow local authorities to prepare Government returns and respond to future central Government payment-by-results initiatives, maximising the income that can be secured for their constrained budgets.

Questions to be asked

But there are questions to be asked about whether or not data linkage and analytics do provide powerful and efficient solutions, which we consider in our article. Concerns have been raised about errors and bias in administrative records, resulting in the unfair targeting of certain families.

Particular groups of parents and families are disproportionately represented in the social security, social care and criminal justice systems, meaning that existing social divisions of class, race and gender are built into the data sets. For example, there is evidence that racial and gender profiling discriminations are built into the data, such as the inclusion in the Metropolitan Police Gangs Matrix of young Black men who have never been in trouble. And automated modelling equates socio-economic disadvantage with risk of child maltreatment, meaning that families are more likely to be identified for early intervention just because they are poor. On top of that, studies drawing on longitudinal data are showing that the success rates of predictive systems are worryingly low.

All of this raises a more fundamental question about whether or not algorithms should be built and implemented for services that intervene in families' lives. In the next stage of our research, we will be asking parents for their views on this and on the way that information about families is collected and used by policy-makers and service providers.

Problem-solving for Problem-solving: Data Analytics to Identify Families for Service Intervention

Presentation at the British Sociological Association Annual Conference 2021

Project PI Ros Edwards presented findings from our project at the British Sociological Association annual conference 2021.

The paper, 'Problem-solving for Problem-solving: Data Analytics to Identify Families for Service Intervention', looks at the way that the promise of technological fixes in the family policy field has set up a set of dependencies between public services and data analytic companies, entrenching a focus on individual families, rather than social conditions, as the source of social problems.

Watch Ros’ presentation.

The paper that was the basis of Ros's presentation has now been published in Critical Social Policy.