Data Protection Regulation

2 March 2018 at 9:55am
I've had a number of questions recently about how long help desks should keep personal data about the queries they receive. The correct answer is "as long as you need, and no longer". But I hope the following examples of why you might need to keep helpdesk tickets are more helpful than that bare statement:
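The examples themselves are in the full post, but once you have decided why each category of ticket is kept, the resulting rule is easy to make mechanical. A minimal sketch in Python, assuming a hypothetical ticket store where each ticket carries a category and a closed_at timestamp (the retention periods shown are invented, not recommendations):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical per-category retention periods: the right values depend
# entirely on why each kind of ticket needs to be kept.
RETENTION = {
    "password_reset": timedelta(days=90),
    "security_incident": timedelta(days=730),
    "general_query": timedelta(days=180),
}

def tickets_due_for_deletion(tickets, now=None):
    """Yield tickets whose retention period has expired.

    Each ticket is assumed to be a dict with 'category' and
    'closed_at' (a timezone-aware datetime) keys.
    """
    now = now or datetime.now(timezone.utc)
    for ticket in tickets:
        period = RETENTION.get(ticket["category"])
        if period is not None and ticket["closed_at"] + period < now:
            yield ticket
```

The point is not the code but that the decision, a retention period per category of ticket, has to be made first.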
2 March 2018 at 9:49am
Collections of free text – whether in database fields, documents or email archives – present a challenge both for operations and under data protection law. They may contain personal data but it's hard to find: whether you're trying to use it, to ensure compliance with the data protection principles, or to allow data subjects to exercise their legal rights. Some level of risk is unavoidable in these collections, but there are ways to reduce it.
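One way to reduce it is simply to find out where personal data is hiding. As a very rough sketch, and bearing in mind that pattern-matching of this kind is a triage aid rather than proof of compliance, a couple of regular expressions can surface the most obvious identifiers in a pile of free text:

```python
import re

# Crude patterns for two common identifiers. Real personal data takes
# many more forms, so absence of a match proves nothing.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def scan_free_text(records):
    """Return (record_index, kind, match) tuples for likely personal data."""
    hits = []
    for i, text in enumerate(records):
        for kind, pattern in PATTERNS.items():
            for match in pattern.findall(text):
                hits.append((i, kind, match))
    return hits

print(scan_free_text(["Please reply to jo@example.ac.uk", "nothing here"]))
# [(0, 'email', 'jo@example.ac.uk')]
```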
28 February 2018 at 8:30am
Although the Article 29 Working Party seem to have had applications such as incident response in mind when drafting their guidance on exports, that guidance could also be helpful in the field of federated authentication.
23 February 2018 at 11:31am
When incident response teams (CSIRTs) detect an attack on their systems, they normally report details back to the network or organisation from which the attack comes. This can have two benefits for the reporter: in the short term, making the attack stop; in the longer term, helping that organisation to improve the security of its systems so they are less likely to be used in future attacks.
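Purely as an illustration of what such a report might contain – the addresses here are hypothetical and the format is mine, not a CSIRT standard – Python's standard library is enough to assemble one:

```python
from email.message import EmailMessage

def build_abuse_report(source_ip, timestamp_utc, dest_port, abuse_contact):
    """Assemble an incident report for the network the attack came from.

    Shares only what the recipient needs to act: the attacking address,
    the time (with timezone), and the targeted service.
    """
    msg = EmailMessage()
    msg["From"] = "csirt@example.ac.uk"   # hypothetical sender
    msg["To"] = abuse_contact
    msg["Subject"] = f"Attack traffic from {source_ip}"
    msg.set_content(
        f"At {timestamp_utc} UTC we observed attack traffic from "
        f"{source_ip} directed at port {dest_port} on our systems.\n"
        "Please investigate the host concerned; log excerpts are "
        "available on request."
    )
    return msg

report = build_abuse_report("192.0.2.10", "2018-02-23 09:14:02", 22,
                            "abuse@example.net")
print(report)
```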
20 February 2018 at 10:09am
The Article 29 Working Party's guidance on Breach Notification suggests some things we should do before a security breach occurs. The GDPR expects data controllers, within 72 hours of becoming aware of any security breach, to determine whether there is a risk to individuals and, if so, to report to the national Data Protection Authority. It seems unlikely that an organisation that hasn't prepared is going to be able to manage that.
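At the very least, preparation means knowing when the clock started. A minimal sketch of the arithmetic, assuming nothing beyond Python's standard library:

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)   # GDPR Article 33(1)

def notification_deadline(became_aware):
    """Latest time a reportable breach can be notified to the DPA."""
    return became_aware + NOTIFICATION_WINDOW

# Hypothetical incident: awareness at 10:09 UTC on 20 February 2018.
became_aware = datetime(2018, 2, 20, 10, 9, tzinfo=timezone.utc)
now = became_aware + timedelta(hours=5)     # five hours into the response
deadline = notification_deadline(became_aware)
print(f"Notify by {deadline:%Y-%m-%d %H:%M %Z}; {deadline - now} left")
```

Everything harder than this sum – deciding whether there is a risk to individuals at all – is exactly what is worth rehearsing before a breach happens.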
16 February 2018 at 10:21am
Article 22 of the GDPR contains a new, and oddly-worded, "right not to be subject to a decision based solely on automated processing". This only applies to decisions that "produce[] legal effects … or similarly significantly affect[]" the individual. Last year, the Article 29 Working Party's draft guidance on interpreting this Article noted that an automated refusal to hire a bicycle – because of insufficient credit – might reach this threshold.
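To see why the wording matters, it helps to make the decision concrete. Here is a hedged sketch of a bicycle-hire style check; the threshold and the review rule are invented for illustration, not taken from the guidance:

```python
def hire_decision(credit_score, threshold=500):
    """Decide a bicycle-hire request on credit score alone.

    Returning the refusal directly makes this a decision 'based solely
    on automated processing'; routing borderline cases to a person is
    one way to put a human meaningfully in the loop.
    """
    if credit_score >= threshold:
        return "approve"
    if credit_score >= threshold - 50:
        return "refer to human review"   # no longer solely automated
    return "refuse"

for score in (560, 470, 300):
    print(score, "->", hire_decision(score))
```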
22 March 2018 at 1:48pm
[UPDATE] A recording of the webinar is now available. The General Data Protection Regulation (GDPR) will require all organisations to examine their processing of personal data. Understanding why and how data are being processed, and what the appropriate legal basis is for the processing, will be essential if organisations are to meet the GDPR’s requirements for information provision and data subject rights.
2 February 2018 at 9:30am
In thinking about the legal arrangements for Jisc's learning analytics services we consciously postponed incorporating medical and other information that Article 9(1) of the General Data Protection Regulation (GDPR) classifies as Special Category Data (SCD): "personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation".
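For planning purposes, that list can be treated as a checklist against a data dictionary. A minimal sketch, with an entirely invented field-to-category mapping:

```python
# The special categories listed in GDPR Article 9(1).
SPECIAL_CATEGORIES = {
    "racial or ethnic origin", "political opinions",
    "religious or philosophical beliefs", "trade union membership",
    "genetic data", "biometric data (identification)",
    "health", "sex life or sexual orientation",
}

# Hypothetical data dictionary: which fields reveal which category, if any.
FIELD_CATEGORY = {
    "disability_support_notes": "health",
    "module_marks": None,
    "chaplaincy_contact": "religious or philosophical beliefs",
}

def scd_fields(field_category=FIELD_CATEGORY):
    """Return the fields needing an Article 9(2) condition before use."""
    return sorted(f for f, c in field_category.items()
                  if c in SPECIAL_CATEGORIES)

print(scd_fields())   # ['chaplaincy_contact', 'disability_support_notes']
```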
23 January 2018 at 11:26am
Reflecting on the scope chosen by Blackboard for our working group – "Ethical use of AI in Education" – it's worth considering what, if anything, makes education different as a venue for artificial intelligence. Education is, I think, different from commercial businesses because our measure of success should be what pupils/students achieve. Educational institutions should have the same goal as those they teach, unlike commercial settings where success is often a zero-sum game.
17 January 2018 at 3:54pm
One of the concerns commonly raised about Artificial Intelligence is that it may not be clear how a system reached its conclusion from the input data. The same could well be said of human decision makers: AI at least lets us choose an approach based on the kind of explainability we want. Discussions at last week's Ethical AI in HE meeting revealed several different options:
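Those options are set out in the full post; purely to illustrate one familiar kind of explainability – a model whose decision rules can be printed and read – here is a minimal scikit-learn sketch over an invented dataset:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented training data: [attendance %, average mark] -> pass/fail.
X = [[95, 70], [90, 62], [60, 40], [55, 35], [80, 55], [40, 30]]
y = ["pass", "pass", "fail", "fail", "pass", "fail"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Unlike a black-box model, the learned rules can be read directly,
# giving one kind of explanation of how a conclusion was reached.
print(export_text(model, feature_names=["attendance", "avg_mark"]))
```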