Back in May, Privacy News Online noted that the legal basis of a data-sharing agreement between Google’s AI subsidiary DeepMind and the UK’s National Health Service (NHS) was suspected to be “inappropriate”. That rather vague judgement has now been clarified: the Information Commissioner’s Office (ICO), the UK’s independent body for upholding information rights, has ruled that the hospital supplying the data failed to comply with the UK’s Data Protection Act when it provided patient details to DeepMind.
Specifically, the ICO found that patients were not adequately informed that their records would be processed for the purpose of clinical safety testing, and it was unconvinced that analyzing 1.6 million patient records was “necessary and proportionate.” Despite this clear breach of data protection rules, the NHS hospital and DeepMind were let off with little more than a slap on the wrist: they were simply required to comply with current UK law in the future and to carry out a privacy impact assessment. DeepMind’s response raises a number of interesting points, which apply more generally to projects involving highly personal patient data:
“In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health. We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong, and we need to do better.”
Those concerns are even more pertinent in the light of a major announcement this week that the UK’s NHS intends to