AI and health: Reaping rewards while addressing ethical concerns
Roberto Viola is Director General of DG CONNECT (Directorate General of Communication, Networks, Content and Technology) at the European Commission. He was the Deputy Director-General of DG CONNECT from 2012 to 2015.
Roberto Viola, Director General of DG CONNECT, answers our questions about how to regulate Artificial Intelligence in healthcare, in order to make sure Europeans benefit from its potential without risks to their privacy, security or safety.
Q: What, in your view, are the big opportunities in AI and health that will make life better for patients and citizens?
A: AI is about to revolutionise healthcare, and will bring significant benefits to patients and citizens. AI can help us to improve disease prevention, early detection, diagnoses and treatment, while empowering citizens to better manage their health. For example, AI systems have already reached impressive levels of accuracy in detecting breast and skin cancers, diabetic retinopathy and cardiovascular diseases.
When applied to genomics, AI will enable switching from generic, population-wide medicine to personalised medicine, helping people receive the right drug at the right time and in the right doses. This could even help us to find long-awaited cures to cancers and rare diseases. AI can also speed up the process of developing and testing drugs and can also make predictions about how a patient will respond to a certain medicine and thus anticipate and avoid serious adverse reactions.
Together with other digital tools, AI can also help people to better monitor and manage their condition, adapt their lifestyle and interact more effectively with doctors and carers. For example, AI assistants, smart walkers and assistive robots can play a key role in supporting the physical, emotional, social and mental health of older people.
The European Commission is investing significantly in AI technologies and applications in the field of health and care. For example, we support initiatives and projects seeking to interconnect health datasets, including genomic repositories and medical image databases, which will allow the development of AI technologies for improving medical diagnosis, disease prediction, treatment and follow-up.
Q: What have been the reactions so far on the guidance on ethics in AI that the European Commission’s High-Level Group issued last April?
A: When the first draft of the ethics guidelines was published for comment in December 2018, the group received over 500 comments, which it considered for the final April publication. The guidelines have attracted a great deal of attention from different types of organisations, not just from Europe but also from elsewhere in the world. In June, the group then launched a large-scale piloting phase to test the assessment list for trustworthy AI. More than 500 organisations have since registered to participate in the pilot, and anyone working on AI who would still like to take part can do so until 1 December.
Q: What challenges have you encountered putting these guidelines together?
A: The final guidelines are the result of bringing together the views of 52 experts from various, and sometimes diverging, disciplines. With this number of experts, the process of course took some time, but the exchanges were very fruitful and produced a very balanced outcome.
Q: Do you recognise any existing frameworks – in Europe or around the globe – that have been informing the proposed guidelines and that you consider good practice?
A: The Ethics Guidelines for Trustworthy AI are unique in covering the whole span from ethical principles via requirements to their implementation. The expert group refers in its guidelines to the work carried out by EGE, the European Group on Ethics in Science and New Technologies, which proposed nine basic ethical principles based on the fundamental values laid down in the European Union Treaties and Charter. With this background the High-Level Expert Group on AI worked out four ethical principles, seven key requirements to accomplish trustworthy AI, and an assessment list to help put them into practice.
Q: How can Europe be at the forefront of AI and health, and what is the role of these guidelines or possible regulation?
A: The EU has launched a strategy on AI that focuses on boosting AI development and uptake, while addressing socioeconomic and ethical aspects. One of the aims of the guidelines is indeed to help increase trust in AI technologies and to ensure that they are fit for purpose, safe and fair. We are also monitoring the regulatory frameworks dealing with data protection, cybersecurity, safety and liability to see whether and where they may need to be adapted in line with developments in AI.
If we want to be a leader in AI and health it is clear that we need to invest further in both sectors, for example looking at where investment can be best targeted to support the shift from hospital-centred health systems to more community-based and integrated care structures. At the EU level, this investment will continue to come from the Horizon Europe research programme, building on the work of the current Horizon 2020 programme and focusing in particular on developing and exploiting the potential of AI technologies in key areas such as health and care. Further investment should also come from the new Digital Europe programme designed to support the digital transformation of society, of which the health and care sector is an important part. This programme will be instrumental in building capacities in key digital technologies such as AI and cybersecurity. For example, it will support establishing world-reference testing facilities for AI and support the development of digital skills among clinical staff and managers in the health and care sector.
Q: The EC’s incoming President Ursula von der Leyen will put forward AI legislation in the first 100 days of her mandate. Can you give us an indication of what is being considered ahead of this? What kind of input from stakeholders would be useful at this stage?
A: Without trust, citizens will be reluctant to use AI products. But I believe that over time, ethical AI has the potential to give Europe a real competitive advantage, as customers in other markets will prefer applications that respect their fundamental rights.
So we are reflecting on the steps needed to ensure that AI is developed and employed in an ethical manner, and naturally the key requirements set out in the ethics guidelines published by the High-Level Expert Group on AI can inform future policy making.
Of course, in considering legislation one has to be mindful of the burden it could impose on companies, especially on SMEs. We want to strike the right balance between innovation and protection. We are fully committed to minimising any potential negative side effects of any future legislation on AI, as we are with any legislative proposal.
Q: A recent study from the UK has hinted that the public may be resistant to the use of AI in health - do you think we have the balance right between seizing the opportunities of AI and privacy concerns?
A: As with any powerful technology, AI raises a number of genuine concerns about, for example, privacy, security, safety and fairness. For example, feeding AI algorithms with insufficient, incomplete or biased data can lead to unreliable, unsafe or biased conclusions with grave consequences for citizens and patients.
We need to make sure that citizens’ needs are at the centre of data-driven healthcare innovation and that citizens themselves are in control of their own health data. Citizens need to trust that their personal data is protected and that their privacy is assured. Only by ensuring that digital solutions are secure, safe and fair, can we expect patients and healthcare professionals to trust and use them. The General Data Protection Regulation (GDPR) has helped significantly in this respect, introducing strict conditions with regard to consent to the use of data, for example, or clarifying the right of access to personal data concerning health by citizens. It is important to note that respect for privacy and adequate data governance is one of the seven key requirements identified in the Ethics Guidelines for Trustworthy AI, which all AI systems should meet in order to be trustworthy.
The Commission is also funding research and innovation projects developing innovative data technology solutions aimed at improving the usability of health data, facilitating interoperability and enhancing data privacy.