Looking at the many ethical concerns that have been expressed about the use of Artificial Intelligence in education, it struck me that most fall at the two ends of a scale.
The EU High-Level Expert Group's (HLEG) draft Ethics Guidelines for Trustworthy AI contains four principles and, derived from them, seven requirements for AI systems.
[UPDATE: my slides are now available]
This week I've been presenting at an event on Artificial Intelligence in Education, organised by the Finnish Government in its current role as holder of the EU Presidency. Specifically, I was asked to look at where we might find building blocks for the ethical use of AI in education.
Reflecting on the scope chosen by Blackboard for our working group - "Ethical use of AI in Education" - it's worth considering what, if anything, makes education different as a venue for artificial intelligence. Education is, I think, different from commercial businesses because our measure of success should be what pupils and students achieve. Educational institutions should share the same goal as those they teach, unlike commercial settings where success is often a zero-sum game.
Last week I was invited to a fascinating discussion on ethical use of artificial intelligence in higher education, hosted by Blackboard. Obviously that's a huge topic, so I've been trying to come up with a way to divide it into smaller ones without too many overlaps. So far, it seems a division into three may be possible:
One of the concerns commonly raised about Artificial Intelligence is that it may not be clear how a system reached its conclusion from the input data. The same could well be said of human decision-makers: AI at least lets us choose an approach based on the kind of explainability we want. Discussions at last week's Ethical AI in HE meeting revealed several different options:
