In looking at the many ethical concerns that have been expressed about the use of Artificial Intelligence in education, it struck me that most fall at the two ends of a scale.
The EU High-Level Expert Group's (HLEG) draft Ethics Guidelines for Trustworthy AI contains four principles and, derived from them, seven requirements for AI systems.
[What I meant to say at the Westminster e-Forum on Immersive Technologies]
[UPDATE: my slides are now available]
This week I've been presenting at an event on Artificial Intelligence in Education, organised by the Finnish Government in its current role as holder of the EU Presidency. Specifically, I was asked to look at where we might find building blocks for the ethical use of AI in education.
To my ex-programmer ears, phrases like "web 2.0" and "industry 4.0" always sound a bit odd. Sectors don’t have release dates, unlike Windows 10, iOS 12 or Android Oreo. Oddly, one field that does have major version releases is the law: it would be quite reasonable to view 25th May 2018 as the launch of Data Protection 3.0 in the UK. Looking at past release cycles, it seems likely to be fifteen to twenty years before we see version 4.0.
Earlier this week I gave a presentation to a group from Dutch universities on the ethics work that Jisc has done alongside its studies, pilots and services on the use of data.
Reflecting on the scope chosen by Blackboard for our working group - "Ethical use of AI in Education" - it's worth considering what, if anything, makes education different as a venue for artificial intelligence. Education is, I think, different from commercial businesses because our measure of success should be what pupils/students achieve. Educational institutions should have the same goal as those they teach, unlike commercial settings where success is often a zero-sum game.
Last week I was invited to a fascinating discussion on the ethical use of artificial intelligence in higher education, hosted by Blackboard. Obviously that's a huge topic, so I've been trying to come up with a way to divide it into smaller topics without too many overlaps. So far, it seems a division into three may be possible:
One of the concerns commonly raised about Artificial Intelligence is that it may not be clear how a system reached its conclusion from the input data. The same could well be said of human decision-makers: AI at least lets us choose an approach based on the kind of explainability we want. Discussions at last week's Ethical AI in HE meeting revealed several different options: