AI in healthcare: a legal and ethical balancing act



Paralegal Marie Le Frapper assesses different regulatory approaches to the use of artificial intelligence in healthcare to determine which one strikes the best balance between providing users with adequate protection and encouraging growth and investment

Artificial intelligence (AI) is undoubtedly a hot topic. It is also one of the most publicized and controversial areas of technological development. While we are still a long way from robots taking over the earth, the use of AI has become much more common thanks to improved analysis techniques and the increased availability of data.

The healthcare industry is one where the benefits of AI are undeniable, as this technology can do much of what humans can already do, only more efficiently and at far greater scale, such as finding links in genetic codes. However, its use raises several legal and ethical questions that we must answer. This is why governments and international organizations are now focusing on creating a favorable regulatory framework for AI.

As with any new technological development, AI raises many questions that governments and populations must grapple with. Work on the subject is ongoing at all levels, including in the UK and the EU. For many of us, AI and algorithms are an opaque science that, we are told, is used for the benefit of all. However, it is also possible that the exact opposite is happening. This is why, in 2020, the High-Level Expert Group on AI appointed by the European Commission defined seven requirements considered essential to the ethical use of AI, with an emphasis on transparency, security, fairness and accountability.

Data is what powers AI. Without it, machines could not learn to “think”. This is why the protection of patient medical data is paramount and is now a priority for industry and governments around the world. However, the healthcare sector, in particular, is at high risk of cyber threats due to the sensitive nature of patient medical data.

Because this data sits at the forefront of scientific and technological innovation, and the life sciences sector is worth several billion pounds, it is also a very attractive target for cybercriminals. For example, the Irish Department of Health and the Health Service Executive were the target of a cyberattack earlier this year. This was a direct attack on critical infrastructure and resulted in the cancellation of non-urgent procedures and delays in processing. Likewise, in 2017, the NHS was disrupted by the “WannaCry” ransomware. Ensuring that public and private healthcare providers have the tools to protect patient data will increase trust and encourage many more people to share their medical information with the organizations creating AI, so that there are databases large enough for machine learning.

The framework surrounding data protection is constantly evolving. The Court of Justice of the European Union ruled last year in Schrems II to invalidate the Privacy Shield decision granting adequacy to the United States. The ruling has a significant impact on transatlantic trade and data sharing, while casting a shadow over the UK as the end of the transition period approaches. In the UK, the Supreme Court is to rule on the possibility of opt-out class actions in data breach cases in Lloyd v Google. As healthcare providers and organizations are prime targets, greater protection will be required given the risk of potentially costly claims.

Greater public trust also encourages the provision of information from more diverse populations. We already know that some diseases do not manifest in the same way depending on the ethnic origin of the patient. A very simple example can be seen in an AI tool created to detect cancerous moles. In its early stages of development, the AI was trained on a database made up mostly of images of white skin, meaning it is less likely to detect cancerous patterns on darker skin.


Another problem arising from the use of AI is discrimination. The Dutch government used an algorithm named SyRI to detect possible welfare fraud based on criteria such as the amount of running water used by a household. However, freedom of information requests revealed that SyRI was mainly deployed in low-income neighborhoods, exacerbating stigma. Ultimately, the Dutch court in The Hague ruled that SyRI violated Article 8 of the European Convention on Human Rights, which protects the right to respect for private and family life. The benefits created by AI should not be undermined by biased machine learning, which can be corrected with proper human oversight.

As AI becomes more widespread and the above challenges become more evident, governments are focusing on creating a framework that is not only welcoming for companies in this field, such as life science organizations and pharmaceutical companies, but that also offers sufficient protection for our data.

The cash injections and investments made in the life sciences sector during the pandemic are expected to continue as the Prime Minister seeks to strengthen the UK’s role as a leading country in this sector. Since leaving the European Union, the British government has announced plans to invest £14.9 billion in the year 2021/22, rising to £22 billion by 2025, in research and development in the life sciences industry, with a focus on technology.

In a draft policy document published on June 22, 2021 titled “Data saves lives: reshaping health and social care with data”, the Department of Health and Social Care set out its plan for the future at a time when our health data is essential to the reopening of society. Chapters 5, 6 and 7 of this guidance document focus on empowering researchers with the data they need to develop life-saving treatments, developing the appropriate technical infrastructure, and supporting developers and innovators to improve health and care, with an emphasis on fostering innovation in AI as well as creating a clear and understandable regulatory framework for it. For example, changes were made to government guidelines on AI procurement, encouraging NHS organizations to become stronger buyers, and a commitment was made to develop, by 2023, unified standards for testing the efficacy and safety of AI solutions in close collaboration with the Medicines and Healthcare products Regulatory Agency and the National Institute for Health and Care Excellence.

Another initiative is the AI in Health and Care Award. During its first round, there were 42 winners, including companies such as Kheiron Medical Technologies for MIA (“Mammography Intelligent Assessment”). MIA is deep learning software developed to address challenges in the NHS breast cancer screening program, such as reducing missed diagnoses and tackling life-threatening delays. The use of such software has a significant impact on public health, saving lives through earlier diagnosis and reducing the cost of treatments offered by the NHS. Indeed, research has shown that about 20% of biopsies are performed unnecessarily.

Although the UK is no longer bound by EU law, developments in this sector taking place on the continent must be kept in view. In April 2021, the European Commission published a draft regulation on the harmonization of rules relating to AI. While it takes a risk-based approach to AI, it should be noted that the draft regulation prohibits the use of AI for social scoring by public authorities and for real-time facial recognition (as at issue in the 2020 Bridges v South Wales Police case). Maximizing resources and coordinating investment is also an essential element of the European Commission’s strategy: under the Digital Europe and Horizon Europe programs, the Commission intends to invest €1 billion per year in AI.

Moreover, now that the UK has been granted adequacy, meaning the EU recognizes that the level of protection afforded to personal data in the UK is comparable to that offered by EU legislation, data may continue to flow between the two sides, and significant discrepancies are unlikely to arise in the near future. As with initiatives such as COVAX, greater collaboration and the development of AI trained on databases comprising both EU and UK data would not be surprising.

Governments and stakeholders now see AI as the future of healthcare. While its use raises many ethical questions, the benefits are likely to outweigh the risks, provided the surrounding regulatory framework offers both flexibility for innovators and stringent requirements protecting our medical data. The approaches taken by the UK and the EU appear to focus on the same, relatively uncontentious, criteria. However, the UK government seems more willing to invest in the sector, building on the country’s reputation for genome sequencing. Anyone interested in new technologies, healthcare and data protection should watch upcoming developments in this area, as they promise exciting discussions.

Marie Le Frapper is a paralegal in the Government Legal Department. She graduated in English and French Law from the University of London and now hopes to secure a training contract to qualify as a solicitor.
