Redefining Care: Advocating AI in Medicine Amidst Ethical Concerns

The rise of AI in healthcare now seems inevitable, with its use continuing to grow at a remarkable rate [1], and this growth has raised serious concerns about the ethics of its applications. Privacy issues, discriminatory bias, and lack of transparency rightfully worry both decision-makers and the public [2], [3], [4]. However, while undoubtedly important, these issues are ultimately addressable and should not stand in the way of benefitting from the many ways AI can improve and save our lives. Regulatory, legal, and technical solutions have helped medicine navigate radical changes in the past and, while AI presents unique challenges, it is ultimately in our power to apply it in a way that aligns with our rights and moral values. The benefits of AI interventions in healthcare have already manifested in concrete ways: IBM’s Watson work in oncology, Microsoft’s Hanover system, which surveys cutting-edge research to recommend treatments to patients, and Google’s DeepMind’s work analysing biometric data for health risks are shining examples of the concrete benefits that AI has already delivered to patients.

Results of a convolutional neural network identifying brain tumours in MRI images. From left to right: the MRI scan, manual labelling by doctors, and the algorithm’s output. [5]

The medical field has long met the conditions for an outsized impact by AI applications: highly technical decision-making that draws on hundreds of data points, highly qualified and costly professionals, demand for these skills that permanently outstrips supply, and an immense market that allows companies to invest billions in developing effective tools for treatment and medical administration. The results of these investments are far from frivolous; uses of AI in medicine range from treatment modelling, diagnostics, and precision surgery to patient communication and information delivery, active monitoring, and administrative tasks [6], as well as extending healthcare access to disadvantaged areas that might otherwise lack medical professionals [7].

Innovation in medical AI is progressing at a breakneck pace, and examples of applications abound. Diagnostics is a natural fit for machine learning, given its pattern-finding nature. Patient data, such as vital signs and bio-signals, demographic information, medical history, and laboratory results, allows algorithms trained on large bodies of data to make accurate diagnoses of specific conditions. An example is Linus Health’s dementia detection tools: the company provides an AI-enhanced digital platform that seeks not to replace but to augment human clinical expertise, assessing multiple data points to gauge a patient’s likelihood of showing early signs of certain conditions [8].
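To make the pattern-finding framing concrete, below is a minimal sketch of this kind of tabular diagnostic model, written in Python with scikit-learn. Everything in it is hypothetical: the features, the synthetic data, and the outcome label are invented for illustration, and it does not represent Linus Health’s actual system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical tabular features: age, systolic blood pressure,
# a lab marker, and resting heart rate.
X = np.column_stack([
    rng.normal(60, 12, n),
    rng.normal(130, 15, n),
    rng.normal(1.0, 0.3, n),
    rng.normal(75, 10, n),
])
# Synthetic label loosely driven by age and the lab marker,
# standing in for a real clinical outcome.
y = ((X[:, 0] > 65) & (X[:, 2] > 1.1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```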

Other disciplines are also being transformed by AI interventions. Companies like Vicarious Surgical provide robotic surgery tools that allow unprecedented levels of control and visualisation inside a patient’s body. These robots are assisted by AI, which augments their capabilities with computer vision and computer-enhanced movement [9].

Vicarious Surgical’s robots make use of AI to enhance their vision and replicate natural, smooth movements with their appendages.

Companies are also looking to use AI to improve our approach to mental health. Spring Health utilises machine learning to provide patients with a personalised care plan that might include matching them with suitable therapists (also chosen algorithmically), as well as offering self-guided exercises and evidence-based recommendations. The machine-learning models behind the platform can draw on hundreds of thousands of personal data points, including socio-demographic data, family history, diagnostic criteria based on the DSM-5, and social determinants of health. Combined with extensive datasets, tools like this are highly promising in a landscape where medicine can be too one-size-fits-all and the supply of professionals may not be enough to care for all patients in an adequately personal manner [10].

However, if we wish to keep and expand on these benefits, and to develop medicine in a way that continues to serve us all, it is vitally important that the issues raised by professionals are understood, addressed, and collaboratively solved. The most notable potential issues with AI implementations in any industry are undeniably magnified in healthcare. Where life and limb are at stake, understanding the decisions made by systems, ensuring fairness, and protecting patients’ privacy become all the more critical.

With the advent and widespread use of deep learning algorithms, transparency and explainability have become key issues in this area. Being able to understand medical decisions made by AI is crucial when explaining them to patients, yet many models, such as deep neural networks, are completely opaque in this regard [11]. These black-box systems have troubling implications for the medical field, and studies find that a majority of the public feels uncomfortable with AI making such decisions [11], [12].
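One partial remedy is post-hoc explanation. The sketch below, which reuses the hypothetical model and held-out split from the earlier diagnostic sketch, applies permutation feature importance: a model-agnostic technique that does not open the black box, but does indicate which inputs the model’s predictions depend on.

```python
# Permutation importance: shuffle one input column at a time and measure
# how much the held-out score drops. A large drop means the model's
# predictions rely on that feature.
from sklearn.inspection import permutation_importance

feature_names = ["age", "systolic_bp", "lab_marker", "heart_rate"]  # hypothetical
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, drop in sorted(zip(feature_names, result.importances_mean),
                         key=lambda pair: -pair[1]):
    print(f"{name}: mean score drop {drop:.3f}")
```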

Possibly the most prominent concern expressed in the media around the use of AI in healthcare is discriminatory bias and unfairness in the algorithms. While this issue predates AI itself, having long been present in healthcare, algorithmic discrimination has the potential to perpetuate and amplify existing biases at a large scale.

Examples abound: AI systems have been found to assume ethnic minorities are at lower risk for certain conditions because these groups have fewer resources to spend on healthcare and are therefore underrepresented in medical data. Combined with the lack of explainability of models, which makes decisions based on race, gender, sexuality, or other potentially discriminatory factors hard to identify and curb, this can lead to highly unethical models that perpetuate societal disparities [13].
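Disparities like these can at least be measured. Below is a minimal sketch of a simple fairness audit on synthetic data, comparing false-negative rates across a hypothetical group attribute; the gap is deliberately constructed here, but it is exactly the kind of signal such audits exist to surface.

```python
# A toy fairness audit: compare false-negative rates across a hypothetical
# demographic group. The predictions are synthetic and built to under-diagnose
# group 1, mimicking the underrepresentation problem described above.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)      # hypothetical binary group membership
y_true = rng.integers(0, 2, n)     # condition present (1) or absent (0)
# Synthetic model output that misses ~40% of true positives in group 1.
y_pred = np.where((group == 1) & (y_true == 1),
                  (rng.random(n) > 0.4).astype(int),
                  y_true)

for g in (0, 1):
    positives = (group == g) & (y_true == 1)
    fnr = 1.0 - y_pred[positives].mean()   # fraction of true positives missed
    print(f"group {g}: false-negative rate = {fnr:.2f}")
```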

In addition, issues of privacy and anonymity arise with almost any deployed AI application, but they present a particular challenge in healthcare, as the data needed to train models is often inherently private and identifiable. Medical AI must reconcile the potentially contradictory demands of utilising vast quantities of sensitive data to make predictions about people’s health while keeping all of that data unidentifiable and confidential [14].
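One family of concrete tools for this tension comes from differential privacy. The sketch below shows its simplest instrument, the Laplace mechanism, releasing a noisy aggregate count instead of raw records; the data and the privacy budget are hypothetical.

```python
# The Laplace mechanism from differential privacy: add noise calibrated to
# the query's sensitivity and a privacy budget epsilon, so that any single
# person's presence in the data has a provably bounded effect on the output.
import numpy as np

rng = np.random.default_rng(2)
ages = rng.integers(18, 90, 1000)   # synthetic stand-in for sensitive records

epsilon = 0.5                       # privacy budget: smaller means more private
sensitivity = 1.0                   # one person changes a count by at most 1
true_count = int((ages > 65).sum())
noisy_count = true_count + rng.laplace(scale=sensitivity / epsilon)
print(f"true count: {true_count}, privately released count: {noisy_count:.1f}")
```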

Google’s DeepMind Health offers an example of a privacy breach in an AI company’s handling of medical data, as reported by The Verge [15].

Addressing these issues will likely prove the industry’s biggest challenge, far more than any technical obstacles along the way. Some of them are, in many ways, intrinsic to the AI approach. Discriminatory bias reflects real-world biases held by societies and perpetuated in the medical system. On the one hand, this means that avoiding it entirely will likely prove nearly impossible until the underlying factors that cause these disparities are curtailed. On the other hand, these biases are already held and perpetuated by human doctors, decision-makers, and other medical professionals. AI need not be perfect in this regard, only better. And while we do observe egregious examples of such discrimination in both AI and traditional medical practice, algorithms afford us much more control with which to address the issue, by building fairness and inclusion into the systems themselves. This can take the form of applying corrections to biased algorithms, which is already a proven practice, or of replacing them with more carefully crafted systems.
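As an illustration of what such a correction might look like, the sketch below applies one standard post-processing idea on synthetic data: choosing per-group decision thresholds so that false-negative rates match a common target. It is a toy version of the technique, not a description of any deployed system.

```python
# Post-processing correction: pick per-group decision thresholds so that
# false-negative rates match a common target. The risk scores are synthetic,
# with group 1 scored systematically lower to mimic the bias discussed above.
import numpy as np

rng = np.random.default_rng(3)
n = 4000
group = rng.integers(0, 2, n)
y_true = rng.integers(0, 2, n)
scores = 0.6 * y_true + rng.normal(0.0, 0.2, n) - 0.15 * (group == 1)

target_fnr = 0.10
for g in (0, 1):
    pos_scores = scores[(group == g) & (y_true == 1)]
    # Threshold below which only `target_fnr` of this group's positives fall,
    # so each group ends up with (approximately) the same false-negative rate.
    threshold = np.quantile(pos_scores, target_fnr)
    print(f"group {g}: corrected decision threshold = {threshold:.3f}")
```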

Similarly, lack of explainability is endemic to modern machine learning, which means a certain level of transparency may be unachievable. Given this, it will prove useful to recontextualise the basis of our trust in medical decisions, focusing more on the reliability and past effectiveness of decision-making algorithms than on any notion of the reasoning behind particular decisions. As the amount of data involved in medical decision-making grows beyond what human doctors can apprehend, some authors argue that transparency is not always necessary and that reliability can be a sufficient foundation for trust, provided algorithmic decisions are deliberated upon, and potentially corrected, by humans in a way that takes our values into account [11].

The concerns surrounding privacy and anonymity are perhaps the ones we have the most concrete tools to address. Systems designed to share sensitive medical data while maintaining confidentiality and accountability already exist, and similar solutions could be made available to developers working on AI systems.

The NHS’s Secure Boundary protects confidential medical data and enables its secure sharing.

The UK’s NHS implements a system called Secure Boundary [16], which protects traffic to and between NHS systems while still allowing data to be shared with relevant organisations. Applications such as this, combined with known methods for anonymising data and regulations that ensure AI companies comply with data security requirements, can help assuage concerns and enable the development of AI that violates the rights of neither the patients whose data it is trained on nor those it helps.
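As one small illustration of such methods, the sketch below pseudonymises a record by replacing its direct identifier with a salted hash before sharing. The record and identifier are invented, and pseudonymisation on its own is not full anonymisation; it is one layer among the safeguards described above.

```python
# Pseudonymisation sketch: replace a direct identifier with a keyed hash
# before a record leaves the secure boundary. The salt must itself be kept
# secret and separate from the shared data.
import hashlib
import os

SALT = os.urandom(16)  # secret key material, never stored with the data

def pseudonymise(identifier: str) -> str:
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

record = {"patient_id": "ABC-1234567", "age": 71, "diagnosis_code": "I10"}
shareable = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(shareable)
```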

It is evident, when considering how to tackle these concerns, that the ethical issues facing AI in healthcare are far from trivial. Severe hurdles await developers, healthcare professionals, and corporate decision-makers before a holistic framework for the ethical implementation of AI in such a sensitive field can be achieved. Nevertheless, medical AI offers astounding benefits and the opportunity to revolutionise medical practice: improved patient care and treatment, extraordinarily efficient diagnostic tools, precision surgery, personalised care plans, and many other auspicious technologies. These innovations hold the promise of addressing persistent challenges in the healthcare system, such as the pervasiveness of misdiagnosis and the need for more personalised treatment.

As AI becomes increasingly prevalent in healthcare, it is crucial to address these issues head-on. Efforts to mitigate these ethical challenges should focus on developing regulatory frameworks that establish fairness, inclusivity, and patient welfare as priorities, as well as technical solutions that ensure these guidelines are met. At the same time, it is equally important to welcome and encourage the transformative impact that AI can have on medical practice, without unnecessary hindrance. Striking this balance requires interdisciplinary collaboration: healthcare professionals, technologists, ethicists, and policymakers should all be actively involved in shaping a safe environment for the future development of this breakthrough technology. By doing so, we can create a future where AI complements human expertise, contributing to a healthcare system that is both technologically advanced and ethically sound.

References

[1] Precedence Research. (2023). Artificial Intelligence (AI) in Healthcare Market. [https://www.precedenceresearch.com/artificial-intelligence-in-healthcare-market]

[2] Tang L, Li J, Fantus S. Medical artificial intelligence ethics: A systematic review of empirical studies. DIGITAL HEALTH. 2023;9. [https://doi.org/10.1177/205520762311860]

[3] Toosi Saidy, N. Artificial intelligence in healthcare: Opportunities and challenges [https://www.youtube.com/watch?v=uvqDTbusdUU]. TEDxQUT.

[4] Mathur, V. Artificial Intelligence in Healthcare – The Need for Ethics [https://www.ted.com/talks/varoon_mathur_artificial_intelligence_in_healthcare_the_need_for_ethics]. TEDxUBC.

[5] Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A., Bengio, Y., Pal, C., Jodoin, P-M., & Larochelle, H. (2017). Brain tumour segmentation with Deep Neural Networks. Medical Image Analysis, 35, 18-31. [https://doi.org/10.1016/j.media.2016.05.004]

[6] Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019 Jun;6(2):94-98. [https://doi.org/10.7861/futurehosp.6-2-94]

[7] Guo J, Li B. The Application of Medical Artificial Intelligence Technology in Rural Areas of Developing Countries. Health Equity. 2018 Aug 1;2(1):174-181. [https://doi.org/10.1089/heq.2018.0037]

[8] Linus Health. [https://linushealth.com/]

[9] Vicarious Surgical. [https://www.vicarioussurgical.com/]

[10] Spring Health. [https://www.springhealth.com/]

[11] Durán JM, Jongsma KR. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J Med Ethics. 2021 Mar 18:medethics-2020-106820. [https://doi.org/10.1136/medethics-2020-106820]

[12] Tyson, A., Pasquini, G., Spencer, A., & Funk, C. (2023). 60% of Americans Would Be Uncomfortable With Provider Relying on AI in Their Own Health Care. Pew Research Center. [https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/]

[13] Hoffman S. The Emerging Hazard of AI-Related Health Care Discrimination. Hastings Cent Rep. 2021 Jan;51(1):8-9. [https://doi.org/10.1002/hast.1203]

[14] Vellido A. Societal Issues Concerning the Application of Artificial Intelligence in Medicine. Kidney Dis (Basel). 2019 Feb;5(1):11-17. [https://doi.org/10.1159/000492428]

[15] Google’s DeepMind made ‘inexcusable’ errors handling UK health data, says report. The Verge. [https://www.theverge.com/2017/3/16/14932764/deepmind-google-uk-nhs-health-data-analysis]

[16] Cyber and data security services and resources. NHS Digital. [https://digital.nhs.uk/cyber-and-data-security/services?area=managing-security]
