How Should Education Use AI?

Friday, May 1, 2020 - 11:56

In looking at the many ethical concerns that have been expressed about the use of Artificial Intelligence in education, it struck me that most fall at the two ends of a scale. On the one hand, questions of human autonomy lead to concerns about cookie-cutter approaches, where AI treats every student according to a rigid formula; on the other hand, questions about the social function of education raise concerns about hyper-personalisation, where students cannot learn together because each one is doing different things at different times. As we move toward either of those extremes, we seem to need a human to step in and say "no further".

But we don't worry – indeed we don't usually notice – when we use AI for things like spelling or grammar checks. So perhaps the best place to start is in that middle ground, using AI as an assistant, not as a decision-maker?

This can still be very powerful, making best use of AI's capacity to sift through immense quantities of unstructured input and suggest which parts of it may be relevant to an individual human's current needs (a toy sketch of this suggest-and-shortlist pattern follows the list below). So, for example:

  • An AI might act as a study-buddy, spotting when a student seems to have got stuck on a topic and suggesting "have you thought about it this way?" (or, particularly in current circumstances, suggesting when to take a break from screen-staring or video-conferencing);
  • An AI might help with research, by suggesting a short-list of legal case reports or published papers that seem to be relevant to a particular area of study;
  • AI is already used to analyse student essays and highlight to tutors any passages that they should check for plagiarism;
  • AI could help with on-line invigilation, drawing humans' attention to unusual patterns that may need later investigation.
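
To make that suggest-and-shortlist pattern concrete, here is a minimal sketch in Python. Everything in it – the toy TF-IDF scoring, the function names, the example papers – is an illustrative assumption rather than any real product's method: it ranks documents against a query and hands a short-list back to a human, rather than choosing for them.

```python
# A toy "suggest, don't decide" short-lister: score documents against a
# query with TF-IDF cosine similarity and return ranked suggestions for a
# human to review. All names and data here are illustrative.
import math
import re
from collections import Counter

def tokenise(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def shortlist(query: str, documents: dict[str, str], top_n: int = 3) -> list[tuple[str, float]]:
    """Rank documents by TF-IDF cosine similarity to the query."""
    doc_tokens = {title: tokenise(text) for title, text in documents.items()}

    # Inverse document frequency: rarer terms carry more weight.
    df = Counter()
    for tokens in doc_tokens.values():
        df.update(set(tokens))
    idf = {term: math.log(len(documents) / count) + 1.0 for term, count in df.items()}

    def vector(tokens: list[str]) -> dict[str, float]:
        if not tokens:
            return {}
        tf = Counter(tokens)
        return {t: (c / len(tokens)) * idf.get(t, 1.0) for t, c in tf.items()}

    def cosine(a: dict[str, float], b: dict[str, float]) -> float:
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        norm = math.sqrt(sum(w * w for w in a.values())) * math.sqrt(sum(w * w for w in b.values()))
        return dot / norm if norm else 0.0

    q = vector(tokenise(query))
    scores = [(title, cosine(q, vector(tokens))) for title, tokens in doc_tokens.items()]
    # A ranked short-list: the human still decides what is actually relevant.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:top_n]

# Made-up case reports for the legal-research bullet above.
papers = {
    "Duty of care in negligence": "negligence requires a duty of care, breach and damage",
    "Contract formation": "a contract needs offer, acceptance, consideration and intention",
    "Vicarious liability": "an employer may be liable for torts committed by employees",
}
for title, score in shortlist("duty of care and negligence", papers, top_n=2):
    print(f"{score:.2f}  {title}")
```

The key design choice is the return type: a ranked list of suggestions, not a single answer acted on automatically.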

The spelling/grammar example may provide further guidance: an AI assistant should always know, and signal, its own limitations. Where there is a "right" answer (spelling) it may be acceptable for it to act silently; where the rules are less clear, or where a human may reasonably choose to break them (grammar), it may be better for it to mark "this doesn't look right to me" and offer alternatives for the human to consider (including, of course, "it's OK, I really did mean to start a paragraph with 'But'").
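
As a minimal sketch of that distinction – again with made-up names, rules and word-lists, not any real checker's behaviour – the assistant below silently fixes spellings that have a single accepted correction, but only flags a breakable grammar rule, offering alternatives that include the original so the human can say "I meant it":

```python
# Unambiguous spellings are fixed silently; grammar judgements are only
# flagged, with alternatives, so the human keeps the final say. The
# dictionary, the rule and the Suggestion type are all illustrative.
from dataclasses import dataclass

@dataclass
class Suggestion:
    passage: str             # the text the assistant is unsure about
    alternatives: list[str]  # options for the human to consider
    note: str                # why it was flagged

# "Right answer" cases: misspellings with a single accepted correction.
KNOWN_FIXES = {"recieve": "receive", "seperate": "separate"}

def review(sentence: str) -> tuple[str, list[Suggestion]]:
    """Return the silently-corrected sentence plus flagged judgement calls."""
    words = [KNOWN_FIXES.get(w.lower(), w) for w in sentence.split()]
    fixed = " ".join(words)
    flags = []
    if words and words[0].lower() in {"but", "and"}:
        # A rule a human may reasonably break: flag it, offer choices,
        # and include the original so "leave it alone" stays on the table.
        flags.append(Suggestion(
            passage=words[0],
            alternatives=["However", "Also", words[0]],
            note="this doesn't look right to me: opening with a conjunction",
        ))
    return fixed, flags

corrected, flagged = review("But we recieve useful feedback")
print(corrected)                # "But we receive useful feedback"
print(flagged[0].alternatives)  # ["However", "Also", "But"]
```

The design point is the asymmetry: the more judgement a rule involves, the less silently the assistant should act.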

The human may be deliberately breaking the rules to make a point; or they may know about rules that the AI doesn't ("will this combination make a better conference programme?"); or they may be making a decision where they are expected to apply their discretion and instinct. Perfect consistency may not be the desired outcome of some processes: sometimes the same set of inputs should lead to different outputs, in ways that are very hard, or impossible, for an AI to understand.

This approach seems naturally to meet the All-Party Parliamentary Group on Data Analytics' desire for AI to be used to promote the purpose of education, while leaving humans to decide – in any particular circumstance – what that purpose requires.