
Building blocks for trustworthy AI in education

Thursday, December 19, 2019 - 12:08

[UPDATE: my slides are now available]

This week I've been presenting at an event on Artificial Intelligence in Education, organised by the Finnish Government in its current role as holder of the EU Presidency. Specifically, I was asked to look at where we might find building blocks for the ethical use of AI in education.

Looking at the EU High-Level Expert Group's list of Principles and Requirements for Trustworthy AI, it seems that the General Data Protection Regulation can be a significant building block. At least seven of the group's eleven "ethical" issues are already covered by the GDPR, which means both that ethics is a legal requirement and that we have plenty of guidance on implementation, from the law itself and from regulators.

That seems to leave three areas: whether we should do things at all (the HLEG calls this Societal and Environmental Wellbeing); whether we should let machines do them (though the GDPR does offer some guidance on "automated decision making"); and, if so, how much and what kind of explanation of those decisions we should require. To make these a little more concrete, we looked at three specific questions: should students be allowed to use AI to tell them which passages to study in order to pass their exams; should we let AI decide which students are accepted onto an over-subscribed course; and do teachers need a complete understanding of how an AI predicts grades, or just of the factors that affected each student's prediction? The answers to all of these turned out to be highly dependent on detail, confirming that we do, indeed, need some ethical principles on which to base them.

As to where we might find those principles, there are plenty of ethics codes available (one speaker estimated around a hundred!); there may also be GDPR-based guidance (the UK Information Commissioner has just published a 150-page draft on explainability); and I suggested we might need to keep in mind the statement of the purpose of education in the UN's Universal Declaration of Human Rights: "the full development of human personality".

In summing up, Ari Korhonen from Aalto University reminded us of the essential need to maintain trust, and suggested that it might be useful to review how we retain trust when delegating tasks to other humans, for example to teaching assistants or external examiners.