How do our EdTech Certification criteria emerge from our work at the Digital Futures Commission?

By Ayça Atabey, Sonia Livingstone & Kruakae Pothong

After three years of intensive research on the governance of education data at the Digital Futures Commission, we developed the “Blueprint for Education Data: Realising children’s best interests in digitised education”, a practical framework for tackling EdTech companies that violate children’s privacy while children learn at school.

The Blueprint sets out 10 certification criteria for all EdTech used in schools for teaching, learning, administration and safeguarding. Here we explain the 10 criteria and how they are grounded in our research.

1. Full compliance with the UK Age Appropriate Design Code (AADC)

To identify what a child rights-respecting future for EdTech looks like, we researched EdTech companies’ terms of use and privacy policies and their contracts with schools, and interviewed practitioners to explore the gap between what the law says and what happens in practice. We identified a host of problems regarding compliance with data protection law (e.g., when relying on ‘legitimate interests’). In our Governance of data for children’s learning in UK state schools report, we called for stakeholders to address these problems, including by ensuring full compliance with the UK AADC. We advocated for this and explained why it is needed in A socio-legal analysis of the UK governance regimes for schools and EdTech, among other outputs. Drawing on the expertise of colleagues, we made four proposals to the ICO for developing their framework on children’s best interests and have advocated to them that the AADC’s standards should be applied to all EdTech.

2. Compliance with privacy and security standards, proportionate to the risks of the data processing, and with the UK government’s accessibility requirements

Compliance with data protection law isn’t sufficient in itself. EdTech companies must also comply with other frameworks (e.g., cybersecurity, equality laws and accessibility requirements) to ensure their data activities are in children’s best interests. As we know, privacy matters in enabling a safer online experience for children, and the compliance gaps with privacy and security standards that we highlighted in several DFC reports and blogs must be addressed. Accordingly, we believe certification criteria should include compliance with relevant legislation and regulations for data protection, privacy and security, and with good practices of risk–benefit calculation.

3. Automatic application and extension of high privacy protection by EdTech to any resources used or accessed as part of a user’s digital learning journey by default and design

EdTech companies can better address the best interests of children through data protection by design and by default. Today, not all resources or connected services provide the same level of protection. To ensure consistently high privacy protection for children, EdTech providers must offer high privacy protection within their own product or service environment and extend the same protection to users’ interactions with other products or services accessed through that environment. In this way, children can enjoy consistently high privacy protection throughout their digital learning journey, irrespective of the varying privacy protection offered by these other products and services. This privacy extension can be achieved, for example, by creating isolated ‘user space’ environments which act like containers and apply the provider’s own privacy policies to anything accessed within them.
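
The Blueprint does not prescribe a particular implementation, but as a rough sketch of the idea, a provider could route every request made from inside its learning environment, including requests to embedded third-party resources, through a policy layer that applies its own privacy defaults before anything leaves the container. The policy fields, function name and URLs below are purely illustrative assumptions.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical provider-wide privacy policy applied to ALL outbound requests
# made from inside the learning environment, including requests to
# third-party resources embedded in a lesson.
POLICY = {
    "blocked_query_params": {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"},
    "blocked_headers": {"cookie", "referer"},  # no cookies or referrers leave the container
}

def apply_privacy_policy(url: str, headers: dict) -> tuple[str, dict]:
    """Return a sanitised (url, headers) pair that honours the provider's own policy."""
    parts = urlsplit(url)
    kept_params = [(k, v) for k, v in parse_qsl(parts.query)
                   if k.lower() not in POLICY["blocked_query_params"]]
    clean_url = urlunsplit(parts._replace(query=urlencode(kept_params)))
    clean_headers = {k: v for k, v in headers.items()
                     if k.lower() not in POLICY["blocked_headers"]}
    return clean_url, clean_headers

# Example: a third-party video embedded in a lesson
url, headers = apply_privacy_policy(
    "https://videos.example.com/clip?id=42&utm_source=edtech&gclid=abc",
    {"Cookie": "tracking=xyz", "Accept": "video/mp4"},
)
print(url)      # https://videos.example.com/clip?id=42
print(headers)  # {'Accept': 'video/mp4'}
```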

4. Biometric data is sensitive personal data and must not be processed unless one of the exceptions in the law applies. Biometric data and AI-driven technologies are heavily used in educational settings, and the stakes for children are high. As such, children, parents and caregivers must be explicitly notified of the processing of biometric data and given opportunities to provide informed consent. Children and parents must also be able to object to the processing and to withdraw consent at any time. This is particularly important given that EdTech use can involve pervasive biometric data processing practices which raise legal and ethical questions.

5. Meaningful distinction between factual personal data and inferred or behavioural judgements about children: Maintain a separation between these types of data and do not automate linkages, construct profiles or conduct learning analytics in ways that cannot be disaggregated. Where data are inferred, a clear and transparent account of how the analysis is constructed should be available to the certification body and schools, to ensure that behavioural or educational inferences are meaningful and contestable and that transparency rules are respected, in line with children’s best interests, when complying with data protection law. Ensuring a meaningful distinction is critical given the complexities around connected data for connected services, and given how judgements about children can affect their lives and access to services.
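
To make the distinction concrete, here is one possible data model, a hypothetical sketch rather than anything the Blueprint specifies, which keeps factual records and inferred judgements in separate structures, records how each inference was constructed, and makes every linkage explicit and reversible so that profiles can be disaggregated.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class FactualRecord:
    """Data a school can verify directly, e.g. attendance or an assessment mark."""
    pupil_id: str
    attribute: str
    value: str
    recorded_on: date

@dataclass(frozen=True)
class InferredJudgement:
    """A behavioural or educational inference, held apart from factual data."""
    pupil_id: str
    claim: str
    method: str          # transparent account of how the inference was constructed
    confidence: float    # inferences are probabilistic claims, not facts
    contestable: bool = True

@dataclass
class LinkageLog:
    """Any linkage between the two types of data is explicit, logged and reversible."""
    entries: list = field(default_factory=list)

    def link(self, fact: FactualRecord, inference: InferredJudgement, purpose: str) -> None:
        self.entries.append((fact, inference, purpose))

    def disaggregate(self, pupil_id: str) -> None:
        """Remove all linkages for a pupil, leaving the underlying records intact."""
        self.entries = [e for e in self.entries if e[0].pupil_id != pupil_id]
```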

6. Opportunities to review and correct errors in the data held about children: Proactively provide prominent, child-friendly and accessible tools for children, parents and caregivers to understand what data is held about the child, enable children and caregivers to review and correct any errors in education records about the child, and provide redress if the errors result in harm. Transparency is key here, because children and caregivers can only review errors if they are informed about what data is held about them. EdTech companies have a responsibility to communicate clearly and effectively so that this information can be easily understood and acted upon. Yet our nationally representative survey showed that fewer than a third of children reported that their school had told them why it uses EdTech, and fewer still had been told what happens to their data or about their data subject rights.
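
Purely as an illustration of how such a tool might work, the sketch below models a simple rectification workflow in which a child or caregiver flags an error in an education record and can follow its correction in plain language; the class, field and status names are our own assumptions, not part of the certification criteria.

```python
from dataclasses import dataclass
from enum import Enum

class RequestStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under review"
    CORRECTED = "corrected"
    REDRESS_OFFERED = "redress offered"   # where the error has resulted in harm

@dataclass
class RectificationRequest:
    """A child's or caregiver's request to correct an education record."""
    record_id: str
    requested_by: str          # "child", "parent" or "caregiver"
    description_of_error: str
    status: RequestStatus = RequestStatus.RECEIVED

    def plain_language_update(self) -> str:
        # Child-friendly confirmation so the request can be understood and acted upon
        return (f"We received your request about record {self.record_id}. "
                f"Current status: {self.status.value}.")

request = RectificationRequest("attendance-2023-04", "parent",
                               "Absence on 3 April was authorised")
print(request.plain_language_update())
```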

7. Vulnerability disclosure: Provide prominent and accessible pathways for security researchers and others to report any security vulnerabilities in the tools, and establish an internal process to act promptly on reported vulnerabilities. As the use of EdTech in schools increases, children’s education data is entering the global data ecosystem, with the data risks this brings, so addressing vulnerability disclosure standards is more urgent than ever. Yet there is currently a lack of resources for those who deal with data protection and security, and prominent and accessible pathways to report and act on security vulnerabilities are needed.
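
One widely recognised pathway is a security.txt file published under RFC 9116 at /.well-known/security.txt, which gives researchers a prominent, machine-discoverable place to report. The sketch below generates a minimal example; the contact address and policy URL are placeholders, not a recommendation of any particular provider’s details.

```python
# Minimal sketch: generate an RFC 9116 security.txt so that security researchers
# have a prominent, machine-discoverable way to report vulnerabilities.
from datetime import datetime, timedelta, timezone

fields = {
    "Contact": "mailto:security@edtech-provider.example",   # placeholder address
    "Expires": (datetime.now(timezone.utc) + timedelta(days=365)).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "Policy": "https://edtech-provider.example/vulnerability-disclosure",
    "Preferred-Languages": "en",
}

security_txt = "\n".join(f"{name}: {value}" for name, value in fields.items()) + "\n"

# The file would be served at https://<provider domain>/.well-known/security.txt
print(security_txt)
```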


8. Evidence-based educational benefits: Provide up-to-date, peer-reviewed evidence of the benefits of EdTech products, produced by independent experts free from any conflict of interest and using robust methodologies. Our expert roundtable report highlighted the lack of evidence about educational benefits: the experts stated that there is a lack of common definitions, evidence or benchmarks for what benefits children in education. Our Education data reality report showed that schools and teachers have similar concerns. Our nationally representative survey results show that children have mixed views about the EdTech products they use in school, and some children doubt the benefits.

9. In-product research: Education data used for R&D by the EdTech provider must meet high ethical and child rights standards. Such research should not be routine, nor conducted on children’s education data without meaningful informed consent. Our Google Classroom and ClassDojo report shows a failure to comply with data protection regulation that leaves schoolchildren vulnerable to commercial exploitation in many contexts, including in-product research. EdTech providers should respect children’s rights and give them control over how their data is used. Fair treatment requires addressing the expectations and needs of children when communicating any information to them for consent. Data-driven education must be responsible, rights-respecting and lawful. In-product research is no exception.

10. Linked services: Ensure that linked services and plug-ins, such as in-app purchases accessible in EdTech products or services, meet these standards. This criterion requires EdTech providers to ensure that the linked services they choose to offer are compliant with the above criteria by design, while the privacy extension in the third criterion addresses privacy protection at the interface level.

Our reports, blogs and research diagnosed a series of problems with education data governance which make life difficult for schools, create regulatory uncertainty for businesses and undermine children’s best interests. We believe our certification criteria will help to unlock the value of education data in children’s interests and the public interest, and will ensure that children’s data aren’t exploited for commercial interests in the EdTech ecosystem. With children’s best interests in mind, we make a clear call to the Department for Education to provide accreditation requirements for EdTech. These should give clear guidance on the standards to be met and a strong mechanism for implementing them, reducing the burden on schools, creating a level playing field in the industry, and delivering children’s rights to privacy, safety and education in a digital world.

Ayça Atabey is a lawyer and a researcher, currently enrolled as a PhD candidate at Edinburgh University. She has an LLM (IT Law) degree from Istanbul Bilgi University and an LLB (Law) degree from Durham University. Her PhD research focuses on the role that the notion of ‘fairness’ plays in designing privacy-respecting technologies for vulnerable data subjects. Her work particularly involves the intersection between data protection, AI, and human rights issues. She is a research assistant for the Digital Futures Commission at 5Rights Foundation. Prior to this, she worked as a lawyer in an international law firm and has been working as a researcher at the BILGI IT Law Institute. Ayça also works as a consultant focusing on data protection, human rights, and migration at UN Women.

Sonia Livingstone DPhil (Oxon), FBA, FBPS, FAcSS, FRSA, OBE is a professor in the Department of Media and Communications at the London School of Economics and Political Science. She is the author of 20 books on children’s online opportunities and risks, including “The Class: Living and Learning in the Digital Age”. Sonia has advised the UK government, European Commission, European Parliament, Council of Europe and other national and international organisations on children’s rights, risks and safety in the digital age.

Dr Kruakae Pothong is a Researcher at 5Rights and visiting research fellow in the Department of Media and Communications at London School of Economics and Political Science. Her research spans the areas of human-computer interaction, digital ethics, data protection, Internet and other related policies. She specialises in designing social-technical research, using deliberative methods to elicit human values and expectations of technological advances, such as the Internet of Things (IoT) and distributed ledgers.

Reposted from: https://digitalfuturescommission.org.uk/blog/how-do-our-edtech-certification-criteria-emerge-from-our-work-at-the-digital-futures-commission/

Applying a Democratic Brake to the Hegemony of Efficiency – a lesson from cultural heritage

By Nicola Horsley

In our recently published book The Trouble with Big Data, Jennifer Edmond, Jörg Lehmann, Mike Priddy and I draw on our findings from the Knowledge Complexity (KPLEX) project to examine how the inductive imperative of big data applied in the sphere of business and elsewhere crosses over to the cultural realm.  The book details how cultural heritage practitioners’ deep understanding of the material they work with and the potential for its use and misuse when linked with other data is being displaced.

It is often remarked that public debate, critical thought and legal regulation simply cannot keep up with the pace of technological change, resulting in the adoption of technologies for purposes that stray beyond ethical guidelines. Thirty years ago, Neil Postman drew on the work of Frederick W. Taylor to describe the principles of Technopoly, which revolves around the primacy of efficiency as the goal of human endeavour and thought, and prizes measurement and machine calculation over human judgement, which was seen as flawed and unnecessarily complex. In order for Technopoly to take hold, a knowledge landscape in which data were divorced from context and collection purpose, disconnected from theory or meaning and travelling in no particular direction, needed to materialise.

In the KPLEX project, we were interested to learn how cultural heritage practitioners’ expertise was marginalised by the offer of a standardised interface through which the knowledge seeker could find data that satisficed as an answer to their research question, bypassing any contextual information that engaging with a human expert might offer. The standardisation of interfaces between knowledge-seekers and myriad knowledge institutions has obfuscated huge differences in the organisation, values and practices of those institutions. Many elements of archival practitioners’ work go unsung, but the drive to furnish users with detailed information about collections without their having to ask for it suggests that dialogic exchange with those who are experts on the source material has come to be treated as an unnecessary barrier, one now removed.

While greater anonymity can promise to tackle problems of bias and prejudice, the reality is that an overwhelming amount of data that does not become formally recorded as metadata continues to be stored as tacit knowledge in cultural heritage practitioners themselves. Within the KPLEX interviews, one head of a service team at a national library described how digitisation presented the mammoth challenge of impressing the importance of context upon a user who has landed on a page without an understanding of how what they are viewing relates to the institution’s collections as a whole (never mind the collections of other institutions). It was felt that the library’s traditional visitors forged an awareness of the number of boxes of stuff that related to their query versus the number they had actually got to grips with, whereas today’s user, presented with a satisficing Google lookalike result, has her curiosity curtailed.

The introduction of new data systems to cultural heritage institutions usually involves archival practitioners working with data engineers to build systems that reorganise and reconstitute holdings and metadata to facilitate digital sensemaking techniques – with the burden usually on archival practitioners’ upskilling to understand computational thinking. Unintended consequences in general, and unreasonable proxies and imperfect, satisficing answers in particular, are at the heart of cultural knowledge practitioners’ reservations about datafication, and these should not be glossed over as resistance to change in their practice or professional status. Underlying these concerns is a perception familiar to readers of Latour’s Science in Action: How to Follow Scientists and Engineers Through Society, which observed how differences in practice were translated into technical problems to which engineers could then apply technological ‘solutions’ – a phenomenon sometimes referred to as ‘techno-solutionism’. The result for the user searching the collections of a library, museum, gallery or archive is a slick interface that feels familiar as it appears to function in the same way as the search engines she uses for a range of other purposes throughout her day.

The ‘quick wins’ of Google’s immediacy and familiarity are a constant thorn in the side of practitioners concerned with upholding rigour in research methods, and there is a real fear that the celebration of openness is working as a diversion away from both the complex material excluded from it and any awareness that this phenomenon of hiddenness through eclipsing ‘openness’ is happening. It is clear that the new normal of the Google paradigm is having a direct effect on how knowledge seekers understand how to ask for knowledge, what timeframe and format of information is appropriate and desirable, and what constitutes a final result. Callon and Latour’s description of a black box seems more pertinent than ever. What is more, the coming together of the paradigms of the archival method and the computational method is viewed as imperilling archivists’ fundamental values if the result is modelled on the algorithms of Google and Facebook, as described at a national library:

"Even though people believe they see everything, they might see even less than before because they’re only being shown the things that the algorithm believes they want to see. So, I’m really concerned with that increasing dominance of these organisations that commercial interests will increasingly drive knowledge creation 
" KPLEX interviewee

Pasquale (2015) describes Google and Apple as ‘the Walmarts of the information economy’, in that they ‘habituate users to value the finding service itself over the sources of the things found’. The invisibilisation of provenance might be the most insidious effect of datafication because, when presented with irreconcilable knowledge claims, our capacity to judge and choose between them will be diminished.

The unchecked permeation of commercial practices and values into every aspect of our engagement with commercial services that trade on the social is already highly questionable. When these practices and values are imported wholesale into public services, we really need a democratic braking system. This is why it’s crucial that any ‘advances’ in data infrastructure or practices are designed with the involvement of, if not led by, the people who have the closest relationship to the data, rather than those with transferable technical expertise. This cannot be done without exploding the myth that the challenges society faces are mere technical problems, and returning to an understanding of the social and an appreciation of the complexity of human stories.

Nicola Horsley is a research fellow at the Centre for Interdisciplinary Research in Citizenship, Education and Society (CIRCES), where she works on research concerned with education for democratic citizenship, human rights education and the social inclusion of migrants and refugees through educational processes.

The Trouble With Big Data: How Datafication Displaces Cultural Practices, by Jennifer Edmond, Nicola Horsley, Jörg Lehmann and Mike Priddy, is published by Bloomsbury.


Lack of transparency in privacy notices about children’s data

Privacy notices are there to tell users of services how information about them is being collected, used and linked together. As a parent, you may wish to know whether information about your child is being linked with other information about you or them, and who can see this information. Whilst this seems straightforward, privacy notices are often opaque: they refer to broad categories of data use, and the specific ways data might be used and linked with other data can be hard to discern.

In 2023 the Government are changing the way they collect data about children who have an Education, Health and Care (EHC) plan or for whom there has been a request for a plan. The data they collect and use is changing from aggregated data (which does not identify individual children) to individual, person-level data on every child.

The Department for Education have provided guidance for local authorities about how to write privacy notices on their websites to reflect this change.

However, in their suggested privacy notice they talk about the sharing of data but do not mention that children’s data may be linked to other data sources about families. This is what is written in the Department for Education’s accompanying guidance document:

Person level data will enable a better understanding of the profile of children and young people with EHC plans and allow for more insightful reporting. The person information will allow for linking to other data sources to further enrich the data collected on those with EHC plans. DfE (2022) p.11

Local authorities will be required to pass person-level data about children to the Department for Education, and yet it remains very unclear how the Department will use it.

Parents may also be forgiven for feeling concerned about the safety of their children’s information once it is passed on. The Information Commissioner’s Office has reported a serious breach involving children’s data, in which a database of children’s records held by the DfE was used to provide age-verification services to gambling companies.

Department for Education reprimanded by ICO for children’s information data breach

The Department for Education (“DfE”) has been reprimanded by the ICO for a data breach arising from the unlawful processing of personal data, including children’s data contained in approximately 28 million records, between 2018 and 2020. The DfE had provided the screening company Trust Systems Software UK Ltd (“Trustopia”) with access to the Learning Records Service (“LRS”), a database containing pupils’ learning records used by schools and higher education institutions. Despite not being a provider of educational services, Trustopia was allowed access to the LRS and used the database for age verification services, which were offered to gambling companies (to confirm their customers were over 18).

The ICO determined that the DfE had failed to protect against the unauthorised processing of data contained in the LRS. As the data subjects were unaware of the processing and unable to object or withdraw consent to the processing, the ICO deemed that DfE had breached Article 5(1)(a) UK GDPR. Additionally, the DfE had failed to ensure the confidentiality of the data contained in the LRS in breach of DfE’s security obligations pursuant to Article 5(1)(f) UK GDPR.

In the reprimand the ICO noted that, but for the DfE being a public authority, the ICO would have fined the DfE just over £10 million. The reprimand from the ICO sets out the remedial actions that the DfE needs to take to improve its compliance with the UK GDPR, including: (1) improving the transparency of the LRS so that data subjects are able to exercise their rights under the UK GDPR; and (2) reviewing internal security procedures to reduce the likelihood of further breaches in the future. The DfE has since removed access to the LRS for 2,600 of the 12,600 organisations which originally had access to the database.

See

https://ico.org.uk/media/action-weve-taken/4022280/dfe-reprimand-20221102.pdf

Department for Education (2022) Special educational needs person level survey 2023: guide.

https://www.gov.uk/government/publications/special-educational-needs-person-level-survey-2023-guide

Education data futures: book launch

Our research project is pleased to share details of the launch of a book which includes findings from our research.

The Digital Futures Commission launch of Education Data Futures is being held on World Children’s Day, November 21, 2022.

The book, a collection of essays from regulators, specialists and academics working on the problems and possibilities of children’s education data, is being launched by Baroness Beeban Kidron and Sonia Livingstone who will be joined by a range of other guests.

Our project is delighted to have contributed a chapter to the book, which outlines some of our findings about the extent to which parents from different social groups trust schools and other public services to share and electronically link data about their children and family. The chapter goes on to relate these findings to wider social licence issues of legitimacy and suspicion, as well as to the implications for government efforts to bring together and use administrative records from different sources.

We argue that government and public services need to engage in greater transparency and accountability to parents, enabling them to challenge and dissent from electronic merging of their data, but that efforts towards informing parents are likely to be received and judged quite differently among different social groups of parents.

The book is open access and, after the launch, will be downloadable from the Digital Futures Commission’s website, where hard copies may also be ordered.

Governments’ use of automated decision-making systems reflects systemic issues of injustice and inequality

By Joanna Redden, Associate Professor, Information and Media Studies, Western University, Canada

In 2019, former UN Special Rapporteur Philip Alston said he was worried we were “stumbling zombie-like into a digital welfare dystopia.” He had been researching how government agencies around the world were turning to automated decision-making systems (ADS) to cut costs, increase efficiency and target resources. ADS are technical systems designed to help or replace human decision-making using algorithms.

Alston was worried for good reason. Research shows that ADS can be used in ways that discriminate, exacerbate inequality, infringe upon rights, sort people into different social groups, wrongly limit access to services and intensify surveillance.

For example, families have been bankrupted and forced into crises after being falsely accused of benefit fraud.

Researchers have identified how facial recognition systems and risk assessment tools are more likely to wrongly identify people with darker skin tones and women. These systems have already led to wrongful arrests and misinformed sentencing decisions.

Often, people only learn that they have been affected by an ADS application when one of two things happen: after things go wrong, as was the case with the A-levels scandal in the United Kingdom; or when controversies are made public, as was the case with uses of facial recognition technology in Canada and the United States.

Automated problems

Greater transparency, responsibility, accountability and public involvement in the design and use of ADS are important to protect people’s rights and privacy. There are three main reasons for this:

  1. these systems can cause a lot of harm;
  2. they are being introduced faster than necessary protections can be implemented; and
  3. there is a lack of opportunity for those affected to make democratic decisions about whether they should be used and, if so, how they should be used.

Our latest research project, Automating Public Services: Learning from Cancelled Systems, provides findings aimed at helping prevent harm and contribute to meaningful debate and action. The report provides the first comprehensive overview of systems being cancelled across western democracies.

Researching the factors and rationales leading to cancellation of ADS systems helps us better understand their limits. In our report, we identified 61 ADS that were cancelled across Australia, Canada, Europe, New Zealand and the U.S. We present a detailed account of systems cancelled in the areas of fraud detection, child welfare and policing. Our findings demonstrate the importance of careful consideration and concern for equity.

Reasons for cancellation

There are a range of factors that influence decisions to cancel the uses of ADS. One of our most important findings is how often systems are cancelled because they are not as effective as expected. Another key finding is the significant role played by community mobilization and research, investigative reporting and legal action.

Our findings demonstrate there are competing understandings, visions and politics surrounding the use of ADS.

Figure: There are a range of factors that influence decisions to cancel the use of ADS. (Data Justice Lab), Author provided

Hopefully, our recommendations will lead to increased civic participation and improved oversight, accountability and harm prevention.

In the report, we point to widespread calls for governments to establish resourced ADS registers as a basic first step towards greater transparency. Some countries, such as the U.K., have stated plans to do so, while other countries, like Canada, have yet to move in this direction.
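
Neither our report nor the stated government plans fix a register schema, but to make the idea tangible, an entry in such a register might record something like the following; the field names and example values are our own illustration, not an official format.

```python
from dataclasses import dataclass

@dataclass
class ADSRegisterEntry:
    """Illustrative fields a public ADS register entry could carry."""
    system_name: str
    operating_agency: str
    purpose: str                   # e.g. "prioritising child-welfare referrals"
    decision_role: str             # "advisory" or "fully automated"
    data_sources: list[str]
    impact_assessment_url: str     # published equality / human-rights impact assessment
    contact_for_challenge: str     # where affected people can seek review or redress
    status: str = "in use"         # or "piloted", "cancelled"

entry = ADSRegisterEntry(
    system_name="Example risk-scoring tool",
    operating_agency="Example city children's services",
    purpose="prioritising child-welfare referrals",
    decision_role="advisory",
    data_sources=["case management records", "school attendance data"],
    impact_assessment_url="https://example.gov/ads/risk-scoring/impact-assessment",
    contact_for_challenge="ads-review@example.gov",
)
```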

Our findings demonstrate that the use of ADS can lead to greater inequality and systemic injustice. This reinforces the need to be alert to how the use of ADS can create differential systems of advantage and disadvantage.

Accountability and transparency

ADS need to be developed with care and responsibility by meaningfully engaging with affected communities. There can be harmful consequences when government agencies do not engage the public in discussions about the appropriate use of ADS before implementation.

This engagement should include the option for community members to decide areas where they do not want ADS to be used. Examples of good government practice can include taking the time to ensure independent expert reviews and impact assessments that focus on equality and human rights are carried out.

Figure: Governments can take several different approaches to implement ADS in a more accountable manner. (Data Justice Lab), Author provided

We recommend strengthening accountability for those wanting to implement ADS by requiring proof of accuracy, effectiveness and safety, as well as reviews of legality. At minimum, people should be able to find out if an ADS has used their data and, if necessary, have access to resources to challenge and redress wrong assessments.

There are a number of cases listed in our report where government agencies’ partnership with private companies to provide ADS services has presented problems. In one case, a government agency decided not to use a bail-setting system because the proprietary nature of the system meant that defendants and officials would not be able to understand why a decision was made, making an effective challenge impossible.

Government agencies need to have the resources and skills to thoroughly examine how they procure ADS systems.

A politics of care

All of these recommendations point to the importance of a politics of care. This requires those wanting to implement ADS to appreciate the complexities of people, communities and their rights.

Key questions need to be asked about how the use of ADS creates blind spots: by increasing the distance between administrators and the people they are meant to serve, scoring and sorting systems oversimplify, infer guilt, wrongly target and stereotype people through categorizations and quantifications.

Good practice, in terms of a politics of care, involves taking the time to carefully consider the potential impacts of ADS before implementation and being responsive to criticism, ensuring ongoing oversight and review, and seeking independent and community review.