Explaining AI algorithms
One of the concerns commonly raised about Artificial Intelligence is that it may not be clear how a system reached its conclusion from the input data. The same could well be said of human decision makers: AI at least lets us choose an approach based on the kind of explainability we want. Discussions at last week's Ethical AI in HE meeting revealed several different options:
- When we are making decisions such as awarding bursaries to students, regulators may well want to know in advance that those decisions will always be made fairly, based on the data available. This kind of ex ante explainability seems likely to be the most demanding, probably restricting the choice of algorithm to those that use known (and humanly meaningful) parameters to convert inputs to outputs;
- Conversely for decisions such as which course to recommend to a student, the focus is likely to be explaining to the individual affected which characteristics led to that decision being reached. Here it may be possible to use more complex models, so long as it's possible to perform some sort of retrospective sensitivity analysis (for example using the LIME approach; a minimal sketch follows this list) to discover which characteristics of the particular individual had most weight in the recommendation they received;
- A variant of the previous type occurs where a student's future performance has been predicted and they, and their teachers, want to know how to improve it. This is likely to require a combination of information from the algorithm with human knowledge about the individual and their progress;
- Finally there are algorithms – for example deciding which applicants are shown social media adverts – where the only test of the algorithm is whether it delivers the planned results and we don't care how it achieves them.
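To make the second, retrospective kind of explainability more concrete, here is a minimal sketch of a LIME-style explanation for one individual. It assumes the open-source `lime` and scikit-learn Python packages; the features, data and model are invented purely for illustration and do not describe any real recommender discussed at the meeting.

```python
# Minimal sketch of per-individual explanation with LIME (illustrative only).
# Assumes the open-source `lime` and scikit-learn packages; the features,
# data and model below are invented examples, not a real system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["attendance", "assignment_avg", "vle_logins", "prior_grade"]

# Toy training data: each row is a student, each column a characteristic.
X = rng.random((500, 4))
y = (0.6 * X[:, 0] + 0.4 * X[:, 1] > 0.5).astype(int)  # toy "recommend course" label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["not recommended", "recommended"],
    mode="classification",
)

# Explain one student's recommendation: which characteristics carried most weight?
student = X[0]
explanation = explainer.explain_instance(student, model.predict_proba, num_features=4)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The output is a short list of that student's characteristics with the local weight each contributed to the recommendation, which is roughly the form of answer the second bullet above asks for.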
Explainability won't be the only factor in our choice of algorithms: speed and accuracy are other obvious factors. But it may well carry some weight in deciding the most appropriate techniques to use in particular applications.
Finally it's interesting to compare these requirements of the educational context with the "right to explanation" contained in the General Data Protection Regulation and discussed on page 14 of the Article 29 Working Party's draft Guidance. It seems that education's requirements for explainability may be significantly wider and more complex.
Comments
Related to this, I've just found a fascinating paper that investigated how people like algorithms to be explained. Well worth reading the whole thing (preprint is available as open access), as there are some variations between the scenarios tested. But it seems that telling the individual what needs to change, and by how much, to change the outcome is often helpful. Explaining that the algorithm's statistics are sound doesn't seem to be.
Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao and Nigel Shadbolt (2018) 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions. ACM Conference on Human Factors in Computing Systems (CHI'18), April 21–26, Montreal, Canada. doi: 10.1145/3173574.3173951
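The "what needs to change, and by how much" style of explanation the paper found helpful can be sketched very simply as a counterfactual search: start from the individual's current data and nudge one characteristic until the model's decision flips. The model and features below are invented for illustration and this is not the method used in the paper itself.

```python
# Illustrative counterfactual-style explanation: how much would one
# characteristic need to change to change the outcome? (toy model only)
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["attendance", "assignment_avg"]
rng = np.random.default_rng(0)
X = rng.random((400, 2))
y = (0.6 * X[:, 0] + 0.4 * X[:, 1] > 0.5).astype(int)
model = LogisticRegression().fit(X, y)


def minimal_increase(model, x, feature, step=0.01, max_steps=200):
    """Increase one feature until the predicted class flips; return the change needed."""
    original = model.predict(x.reshape(1, -1))[0]
    candidate = x.copy()
    for _ in range(max_steps):
        candidate[feature] += step
        if model.predict(candidate.reshape(1, -1))[0] != original:
            return candidate[feature] - x[feature]
    return None  # no flip found within the search range


student = np.array([0.3, 0.35])  # currently on the "wrong" side of the decision
change = minimal_increase(model, student, feature=0)
if change is not None:
    print(f"Raising {feature_names[0]} by about {change:.2f} would change the predicted outcome.")
```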