AI in Education: is it different?

Reflecting on the scope chosen by Blackboard for our working group - "Ethical use of AI in Education" - it's worth considering what, if anything, makes education different as a venue for artificial intelligence. Education is, I think, different from commercial businesses because our measure of success should be what pupils/students achieve. Educational institutions should have the same goal as those they teach, unlike commercial settings where success is often a zero-sum game. We should be using AI to achieve value for those who use our services, not from them. Similarly, we should be looking to AI as a way to help tutors do their jobs to the best of their ability. AI is good at large-scale and repetitive tasks – it doesn't get tired, bored, or grumpy. Well-used AI should help both learners and teachers to concentrate on the things that humans do best.

Clearly there are also risks in using AI in education – there would be little for an ethics working group to discuss if there weren't! The technology could be deployed for inappropriate purposes, or in ways that are unfair to students, tutors, or both. The current stress on using AI only to "prevent failure" feels uncomfortably close to those lines: if instead we use AI to help all students and tutors improve, then they won't presume that any notification from the system is bad news. Getting this right is mostly about purposes and processes. However, there is also a risk of AI too closely mimicking human behaviour: poorly chosen training sets can result in algorithms that reproduce existing human and systemic preconceptions; too great a reliance on student feedback could result in algorithms delivering whatever gives students an easy life, rather than what will help them achieve their potential. An AI that never produces unexpected results is probably worth examining closely to see whether it has fallen into these traps.
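Purely as an illustration of that first trap (this sketch, its data and its numbers are all invented, not something from the working group), a "model" that simply learns the frequencies in past human decisions will faithfully reproduce whatever bias those decisions encoded:

# Toy sketch: a frequency-based "model" trained on biased historical
# decisions reproduces the bias. Groups and numbers are hypothetical.
from collections import defaultdict

# Hypothetical historical records: (group, past human decision).
# Group "A" was historically favoured; underlying ability is identical.
history = [("A", "admit")] * 80 + [("A", "reject")] * 20 \
        + [("B", "admit")] * 40 + [("B", "reject")] * 60

# "Training": estimate P(admit | group) from the biased labels.
counts = defaultdict(lambda: [0, 0])   # group -> [admits, total]
for group, decision in history:
    counts[group][0] += decision == "admit"
    counts[group][1] += 1

model = {g: admits / total for g, (admits, total) in counts.items()}

# Otherwise-identical applicants now get very different scores, purely
# because the training labels embedded a past preconception.
print(model)   # {'A': 0.8, 'B': 0.4}

Real systems are more sophisticated, but the underlying point is the same: the algorithm can only learn what the labels tell it.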

Computers work best when presented with clear binary rules: this course of action is acceptable, that one isn't. The legal system, however, rarely provides rules like that. Laws are often vague about where lines are drawn, with legislators happy to leave to the courts the question of how to apply them to particular situations. As Kroll et al point out, when laws are implemented in AI systems those decisions on interpretation will instead be made by programmers – something that we should probably be less comfortable about (p61). Conversely, laws may demand rules that are incomprehensible to an AI system: for example, European discrimination law prohibits an AI from setting different insurance premiums for men and women even if that is what the input data demand. Finally, and particularly in education, we may well be asking AI systems to make decisions where society has not yet decided what actions are acceptable: how should we handle data from a student that tells us about their tutor or parent? When is it OK for charities to target donors based on the likely size of their donations? When should a college recommend an easier course to a borderline student?
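To see how much interpretation is left to the programmer, consider the obvious way to implement "don't use gender when pricing insurance": strip the field before the model sees it. This sketch is illustrative only – the field names and the PROTECTED set are invented – and whether it actually satisfies the law is exactly the open question, since correlated fields can still act as proxies.

# Naive reading of the rule: drop protected attributes before pricing.
PROTECTED = {"gender"}   # hypothetical scope; the real scope is a legal question

def permitted_features(record: dict) -> dict:
    """Strip protected attributes from an applicant record."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

applicant = {"gender": "F", "age": 30, "occupation": "midwife", "car": "hatchback"}
print(permitted_features(applicant))
# {'age': 30, 'occupation': 'midwife', 'car': 'hatchback'}
# 'occupation' may still encode gender, so deciding whether this is enough
# is precisely the kind of interpretation that ends up with the programmer.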

Lots to discuss...