Generating transparency where none exists: just how are data analytics used in children’s services?

By Val Gillies, Ros Edwards and Sarah Gorin 

The Government’s public consultation on changes to the data protection framework emphasises the importance of public trust and transparency. But when, as part of our research, we tried to establish basic facts about the extent to which local authorities are linking and analysing data on children and families, we hit a brick wall.

Our research is aiming to provide a clearer picture of what parents think about the ways information about them and their children may be linked together and used by local councils. An important part of this work has been speaking directly to parents to see how much support for and trust in this type of activity there is. Alongside these valuable and enlightening conversations, we have also been trying to map the state of play among British local authorities and to find out exactly which authorities are doing what when it comes to operational data linking and matching and the application of predictive analytics to families’ data. 

The Government’s declared commitment is to roll out a ‘world class nationwide digital infrastructure’ and ‘unlock the power of data’, but there is currently no central record of which authorities are doing what.

Freedom of information requests

To try to find out, we submitted Freedom of Information requests to 220 UK local authorities. The 149 English councils participating in the ‘Troubled Families Programme’ (now called the Supporting Families Programme) must, of necessity, link and analyse datasets to ‘identify’ ‘troubled’ households and claim payment-by-results from central government. Yet only 76 responded that they used data analytics. The remainder claimed that their systems did not meet our definition, or responded with a straight ‘no’ to all our questions about their use.

English councils claiming to fall outside our criteria responded in vague and evasive ways. Some, for example, justified a ‘no’ answer to questions about their engagement with data analytics by positioning family work as separate from children’s services, or by using the term ‘data matching’ instead. Further investigation established that many of these councils do in fact use systems with predictive analytic capacity.

For example, Achieving for Children, a social enterprise providing services for children in several local authorities, responded to our FoI request that beyond ‘some basic data monitoring… we do not currently use nor have we previously used any data analytics, predictive analytics and or artificial intelligence systems to assist with this work’. Yet they have used business intelligence technologies on a range of projects involving predictive analytics and algorithms, as noted on the UK Authority Digital Data and Technology for the Public Good website.

Side-stepping terms

Our FoI research also showed that councils side-stepped the term ‘algorithm’ and the concept of AI. Even where they engaged in predictive analytics, they denied using algorithms – yet given the tools they were employing, it is hard to envisage one without the other.
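The point is easier to see with a toy example. The sketch below is entirely hypothetical – the indicator names, weights and threshold are invented for illustration and are not drawn from any council’s actual system – but it shows that even the simplest risk-scoring tool of the kind these platforms offer is, unavoidably, an algorithm:

```python
# Hypothetical sketch only: indicator names, weights and the threshold
# are invented for illustration, not taken from any real council system.

def risk_score(record, weights):
    """Weighted sum of binary indicator flags - a minimal predictive model."""
    return sum(weights[k] for k, v in record.items() if v and k in weights)

WEIGHTS = {"school_absence": 2, "rent_arrears": 1, "police_contact": 3}

def flag_households(records, threshold=3):
    """Return the ids of households whose score meets the threshold."""
    return [rid for rid, rec in records.items()
            if risk_score(rec, WEIGHTS) >= threshold]

households = {
    "A": {"school_absence": True, "rent_arrears": True, "police_contact": False},
    "B": {"school_absence": False, "rent_arrears": True, "police_contact": False},
}
print(flag_households(households))  # ['A']
```

Anything that takes family data in and produces a score or flag out is by definition an algorithm, however the product is branded.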

We received much information that was incomplete, insufficient or irrelevant. A number of councils claimed exemption from the FoI Act on grounds of cost or commercial confidentiality. Where we followed up with more carefully worded requests, we received ambiguously worded replies.

Some local authorities were more forthcoming and open, listing the various tools and companies used to conduct their data analytics. Microsoft Business Intelligence was the most common tool cited. Dorset County Council has a case study on the Local Government Association website of how the tool can be used to ‘enable local professionals to identify potential difficulties for individual children before they become serious problems’. Our FoI established that the council plans to make greater use of AI in the future.

Our analysis of the responses we received, together with the information we have sourced from elsewhere, points to a clear shift in service priorities away from early intervention for parental education and towards child protection and crime prevention. The earlier focus on linking parenting skills to social mobility is now muted, with rationales for data innovation focusing almost exclusively on the pre-emption of problems rather than on the maximisation of children’s future potential.

Our findings around children’s services are reinforced by the work of Big Brother Watch, which has published a detailed analysis of the hidden algorithms councils use to identify disadvantaged households and target them for interventions. The organisation found one of the biggest players in this area, Xantura, to be particularly secretive.

Public trust 

A wide range of data ‘solutions’ are drawn on by local authorities to classify, flag, target and intervene in disadvantaged families and their children. Yet parents are not generally informed of how their information is being used and for what purpose. As we have shown, it is difficult even for us as researchers to establish.

From our work here, it is hard to see how public trust and transparency will be achieved from the opaque, ambiguous and even evasive base that our FoI requests revealed.

Freedom of Information Requests on the Use of Data Analytics in Children’s Services: Generating Transparency is research by Val Gillies and Bea Gardner, with Ros Edwards and Sarah Gorin.

Using Artificial Intelligence in public services – does it breach people’s privacy?

By Ros Edwards, Sarah Gorin and Val Gillies

As part of our research, we recently asked parents what they thought about the use of data linkage and predictive analytics to identify families to target public services.

They told us that they didn’t trust these processes. This was particularly the case among marginalised social groups. In other words, the groups of parents who are most likely to be the focus of these AI identification practices are least likely to see them as legitimate. Now a new report by the United Nations High Commissioner for Human Rights, Michelle Bachelet, highlights major concerns about the impact of artificial intelligence, including profiling, automated decision-making and machine learning, on individuals’ right to privacy.

The report makes a number of recommendations, including a moratorium on the use of AI systems that pose a serious risk to human rights, and a ban on governments’ social scoring of individuals and on AI systems that categorise individuals into groups on discriminatory grounds.

The right to privacy in the digital age: report (2021) builds on two previous reports by the High Commissioner on the right to privacy in the digital age. It incorporates the views of international experts at a virtual seminar, as well as responses to the High Commissioner’s call for input into the report from member states, including the UK.

It examines the impact of digital systems such as artificial intelligence in four sectors, including public services. Artificial intelligence is used in public services such as social care, health, police, social security and education in a range of ways, such as decision-making about welfare benefits and flagging families for visits by children’s social care services.

Concerns are expressed about, for example, the linking of large health, education and social care datasets with other data held by private companies, such as social media companies or data brokers, who, the report says, may gather information outside protective legal frameworks. The involvement of private companies in the construction, development and management of public sector data systems also means they can gain access to datasets containing information about large parts of the population.

There are additional concerns about the potential inaccuracy of historic data and its implications for future decision-making. The report states that these systems unequally “expose, survey and punish welfare beneficiaries”, and that conditions are imposed on individuals that can undermine their autonomy and choice.

A digital welfare fraud detection system was banned by a court in the Netherlands, which ruled that it infringed individuals’ right to privacy. The system provided central and local authorities with the power to share and analyse data that were previously kept separately, including on employment, housing, education, benefits and health insurance, as well as other forms of identifiable data. The tool targeted low-income and minority neighbourhoods, leading to de facto discrimination based on socioeconomic background.

The recommendations in the report include:

  • using a human rights-based approach
  • ensuring legislation and regulation are in line with the risk to human rights, with sectors including social protection to be prioritised
  • developing sector-specific regulatory requirements
  • drastically improving transparency, including the use of registers for AI that contain key information about AI tools and their use, informing affected individuals when decisions are being or have been made automatically or with the help of automation tools, and notifying individuals when the personal data they provide will become part of a dataset used by an AI system.

Given the risks that data linkage and predictive analytics pose to the human rights of individuals and families, it is vital to heed the UN High Commissioner’s call for a moratorium. Public authorities need to pay meaningful attention to the lack of social legitimacy for AI, as evidenced in our research, and to ask themselves whether the risk of further distrust and disengagement among already marginalised social groups, and the consequences for a cohesive and equal society, is worth it.

What do parents think about linking information for family interventions? What we know to date.

Our project team look at responses to a pilot survey of parents undertaken ahead of their new research.

National and local government departments and services collect and hold a range of information about parents and their children. These records include details of the taxes people pay, their medical records, school data and police files. But what do parents think about the idea of the data held on their children being linked with theirs, so that government services can identify families for possible interventions? A pilot survey leading up to a new research project indicates they’re not very keen on the idea. 

Data linkage is a hotly-debated topic with arguments about better targeted, more focused public services often counter-acted with concerns around privacy and lack of trust in the organisations collecting and using the data. 
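For readers unfamiliar with the mechanics, data linkage in this context simply means joining records held by different services on a shared identifier, producing a single combined profile per family. A minimal sketch (the dataset names, identifiers and fields below are invented purely for illustration):

```python
# Hypothetical illustration of data linkage: merging records held by
# different services on a shared family identifier. All names invented.

school = {"fam1": {"unauthorised_absences": 12}, "fam2": {"unauthorised_absences": 0}}
housing = {"fam1": {"rent_arrears": True}, "fam3": {"rent_arrears": False}}

def link(*datasets):
    """Combine per-family records from several datasets into one profile each."""
    linked = {}
    for ds in datasets:
        for fam_id, fields in ds.items():
            linked.setdefault(fam_id, {}).update(fields)
    return linked

profiles = link(school, housing)
print(profiles["fam1"])  # {'unauthorised_absences': 12, 'rent_arrears': True}
```

It is the combined profile, rather than any single service’s record, that can then be scored or flagged, and this is where the privacy and trust concerns debated above arise.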

When it comes to the use of linked data to identify families where it’s thought there may be heightened risk of things such as child abuse or neglect, or truancy and anti-social behaviour, there can be few more contentious areas. But real evidence on what parents think is thin on the ground and this is where our research project hopes to make some inroads and get a better handle on things. 

Pilot survey

A voluntary pilot survey of parents carried out via the Mumsnet website and Twitter received a total of 365 responses – mostly from white mothers in relatively well-off households. We might assume that this group would feel more secure about the linkage and use of records on them and their children, and that they would generally be trusting of organisations such as government departments and local councils. So what did we find?

Only half the parents said they had heard about data linkage and how it worked. Fewer than half thought it acceptable to use it to improve the planning and delivery of family support services. Far fewer (around 15 per cent) thought it should be used to identify specific families who might need intervention but had not asked for support, or to save public money by preventing or catching family problems early.

Their concerns included: families’ right to privacy; increasing stigma; oversimplification of risk factors; discouraging families from seeking help; and problems with data accuracy and safety. About one in five did agree that the more we know about families, the more the nation’s wellbeing can be improved.

Question of trust

There was little trust in organisations that might link data for the above reasons, and specific distrust of private companies, with only four parents saying they would definitely trust them and 309 saying they wouldn’t. One participant said:

“Having algorithms produced by private companies, who are not transparent, making dubious links, which do correlate with truth, is a very dangerous tool. It will not benefit society but will only benefit private companies.”

Parents were almost entirely against government accessing financial details such as bank details, credit cards, supermarket records, CCTV and social media posts as the following comments from participants show:

“More detailed and personal information, PARTICULARLY medical records/social services reports/DLA info, should not be ‘joined up’ as this breaches personal privacy.”

“It might be acceptable to link to credit records and food shopping records on an anonymous basis to produce aggregate information or statistics for research purposes e.g. a study looking at the possible relationships between children’s diets and school attainment, or the impact of debt on parents BUT this sort of data analysis should NEVER identify individuals.”

“Government surveillance of families, without knowledge or consent, is an extremely questionable approach. Government should focus on supporting families, providing public services, creating jobs, ensuring quality of housing, eliminating poverty and increasing community capacities of resilience, care, safety, cohesion and fun.”

Lack of acceptance

So among this group of parents, where we might have expected greater levels of support for the idea of linking data for the purposes of identifying and targeting service intervention, we find in fact a substantial lack of acceptance and trust around the idea of family data linkage.  

A web-based survey of this nature gives us some indication of what people are thinking, but it’s important to try to look at this more carefully. As part of our new research project, we’re commissioning a robustly-designed and -conducted survey of around 900 parents and carrying out a range of in-depth interviews with parents in families most likely to be the target of this type of data linkage and policies associated with it.