Ethical Considerations and Challenges in the Use of AI in Biomedical Research and Patient Care
Murad Ali Khan1* and Muhammad Faseeh2
1Department of Computer Engineering, Jeju National University, Jeju 63243, Republic of Korea
2Department of Electronics Engineering, Jeju National University, Jeju, 63243, Republic of Korea
Submission: January 08, 2025; Published: January 23, 2025
*Corresponding author: Murad Ali Khan, Department of Computer Engineering, Jeju National University, Jeju 63243, Republic of Korea
How to cite this article: Murad Ali K, Muhammad F. Ethical Considerations and Challenges in the Use of AI in Biomedical Research and Patient Care. Biostat Biom Open Access J. 2025; 12(1): 555830. DOI: 10.19080/BBOAJ.2025.12.555830
Abstract
The integration of AI into biomedical research and clinical care offers new possibilities for improving patient outcomes and streamlining medical practice. At the same time, this technological development brings significant ethical and legal challenges that pose potential threats to patient well-being and privacy. Major ethical considerations include the handling of large volumes of sensitive information, equity and bias in AI-driven decision-making, and the need for AI decisions to remain transparent. Legal concerns include defining accountability when AI systems malfunction, conformance with increasingly strict regulatory requirements, and the reexamination of informed consent in light of the novel data uses introduced by AI. This review critically discusses these issues, examining current strategies and proposing robust frameworks for ethical governance and legal compliance. Drawing on a detailed literature review, we offer practical recommendations for overcoming these challenges and call for continuous interdisciplinary collaboration and dynamic policy-making that keep pace with technological advancement. The ultimate goal is to leverage AI’s potential in healthcare while safeguarding patient rights and trust.
Keywords: Ethical AI; Healthcare Privacy; Algorithmic Bias; AI Regulation; Patient Consent
Background
The rapid translation of artificial intelligence (AI) into biomedical research and clinical practice has opened great opportunities for improving the quality of patient care, diagnostic accuracy, and treatment personalization [1]. This technological revolution, however, brings with it ethical challenges of unprecedented magnitude and complexity [2]. As AI systems become increasingly capable of making decisions that directly affect human life, concerns about privacy, autonomy, and potential bias continue to grow [3]. These ethical challenges require detailed consideration if AI is to be used responsibly and equitably in healthcare settings [4].
One major ethical issue concerns the handling of the patient data on which AI models are trained. The confidentiality and security of sensitive medical information are paramount, and the large volumes of data that AI requires pose risks to privacy if not properly managed [5]. Data breaches, unauthorized access, and the potential misuse of data demonstrate the need for strong security measures and equally transparent data-governance policies [6]. Furthermore, the question of consent in the era of AI adds to the complexity of this ethical landscape, because traditional consent forms may not be sufficient to encompass the scope of AI-driven data use [7].
Another important ethical issue is the potential for AI to perpetuate, or even amplify, bias in healthcare delivery [8]. AI systems are only as good as the data on which they are trained, and if historical bias is reflected in a dataset, the resulting algorithms can unwittingly carry those biases forward and produce discriminatory practices at scale [9]. Ensuring fairness in AI applications therefore requires deliberate effort to develop algorithms that are not only effective but also free from biased outcomes that may harm vulnerable populations [10].
Lastly, the opacity of AI decision-making hampers confidence in these technologies among both healthcare professionals and patients [11]. Many systems operate as a ‘black box’: it is difficult to gain insight into, or understand, how their decisions are reached, which complicates the accountability mechanisms that are crucial for clinical application [12]. Minimizing these challenges requires joint efforts from technologists, ethicists, and regulators to establish standards that instill confidence in AI applications while protecting and respecting human dignity and rights [13].
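To make the transparency concern concrete, the brief sketch below illustrates one widely used model-agnostic technique, permutation importance, which estimates how strongly each input influences an otherwise opaque classifier. It is a minimal, hypothetical example on synthetic data using scikit-learn, intended only to show the kind of post-hoc explanation tooling discussed in the explainability literature, not a method drawn from any specific study reviewed here.

```python
# Minimal illustration (assumed example): permutation importance as one
# model-agnostic way to probe which inputs drive a "black box" classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for clinical features; real use would require governed patient data.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the model's score degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {score:.3f}")
```

Explanations of this kind do not open the model itself, but they give clinicians and auditors a ranked view of which inputs mattered, which supports the accountability mechanisms discussed above.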
Literature Review
Ethical Frameworks and Governance
The development of ethical frameworks for AI in healthcare is essential to guide both current applications and future advancements. Literature indicates that such frameworks should be comprehensive, covering aspects from data acquisition to end-of-life decisions made by AI systems [14]. It is crucial that these frameworks are adaptive and inclusive, integrating feedback from a broad range of stakeholders including patients, healthcare providers, ethicists, and AI technologists [15]. This inclusivity ensures that the ethical guidelines are not only robust but also reflective of diverse societal values and norms [16]. Collaborative international efforts are also highlighted as critical, given the global nature of both technology providers and the potential patient base [17]. These frameworks should aim to standardize practices and ensure consistency in ethical AI use across different regions and cultures [18].
Privacy and Consent in Data Usage
Privacy concerns are at the forefront of ethical issues in AI healthcare applications. The literature emphasizes the complexity of managing large datasets while respecting individual privacy rights [19]. Robust encryption technologies and strict access controls are frequently cited as necessary measures to protect patient data [20]. Additionally, the literature calls for the reevaluation of consent processes in healthcare, suggesting dynamic consent models that allow patients to control their data continually [21]. These models could provide patients with transparent information about how their data is used and by whom, offering an opportunity to withdraw consent if desired [22]. This approach not only enhances patient autonomy but also aligns with ethical standards for confidentiality and respect for persons [23].
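To illustrate how a dynamic consent model of this kind might be represented in software, the sketch below models a per-purpose consent record with an audit trail and a check that data may be used only while consent is active. This is a minimal illustrative sketch under assumed conventions: the ConsentRecord class, the purpose labels, and the may_use helper are hypothetical names of our own, not an implementation described in the cited literature.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    """One patient's consent state for a named data use (e.g. 'model_training')."""
    patient_id: str
    purpose: str
    granted: bool = False
    history: list = field(default_factory=list)  # audit trail of (timestamp, action)

    def _log(self, action: str) -> None:
        self.history.append((datetime.now(timezone.utc).isoformat(), action))

    def grant(self) -> None:
        self.granted = True
        self._log("granted")

    def withdraw(self) -> None:
        self.granted = False
        self._log("withdrawn")


def may_use(record: ConsentRecord, purpose: str) -> bool:
    """Data may be used only while consent for this specific purpose is active."""
    return record.purpose == purpose and record.granted


# Example: a patient grants consent for model training, then later withdraws it.
rec = ConsentRecord(patient_id="P-001", purpose="model_training")
rec.grant()
assert may_use(rec, "model_training")
rec.withdraw()
assert not may_use(rec, "model_training")
```

Keeping the audit trail alongside the consent flag means that every downstream data access can be justified against the patient's choice at that moment, which is the core property dynamic consent models aim to provide.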
Addressing Bias and Ensuring Equity
Addressing bias in AI is a multifaceted challenge, as biases can be introduced at many stages, from data collection to algorithm development and application [24]. The literature stresses the importance of creating diverse datasets that accurately reflect the target population demographics [25]. It also discusses the development of algorithms that are designed to be fair by incorporating ethical considerations from the outset [26]. These efforts are crucial for preventing the perpetuation of inequalities and ensuring that AI tools benefit all segments of the population equally [27]. Furthermore, continuous monitoring and updating of AI systems are recommended to adapt to changing societal norms and medical practices, ensuring that AI applications remain fair over time [28].
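One lightweight form of the continuous monitoring recommended above is to recompute basic performance and selection-rate statistics for each demographic subgroup whenever a model or its training data changes, flagging large gaps for review. The dependency-free sketch below illustrates such a check; the subgroup_rates function and the toy labels are purely illustrative assumptions rather than a method taken from the cited studies.

```python
from collections import defaultdict


def subgroup_rates(y_true, y_pred, groups):
    """Per-subgroup selection rate and accuracy for a binary classifier."""
    stats = defaultdict(lambda: {"n": 0, "positives": 0, "correct": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        stats[g]["n"] += 1
        stats[g]["positives"] += int(p == 1)
        stats[g]["correct"] += int(p == t)
    return {
        g: {
            "selection_rate": s["positives"] / s["n"],
            "accuracy": s["correct"] / s["n"],
        }
        for g, s in stats.items()
    }


# Hypothetical predictions for two demographic groups A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = subgroup_rates(y_true, y_pred, groups)
gap = max(r["selection_rate"] for r in rates.values()) - min(
    r["selection_rate"] for r in rates.values()
)
print(rates)
print("selection-rate gap between groups:", gap)
```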
Table 1 provides a clear overview of how specific types of AI errors can lead to adverse outcomes in healthcare settings and offers practical solutions to mitigate these risks. This approach not only underscores the importance of ethical considerations in AI deployment but also promotes a proactive stance on preventing future incidents.

Legal Implications of AI in Healthcare
The deployment of AI technologies in healthcare not only raises ethical issues but also poses significant legal challenges that need to be addressed to ensure compliance and protect patient rights. As AI systems assume roles traditionally held by human practitioners, questions regarding liability and accountability become increasingly complex [29]. This section explores the legal landscape surrounding the use of AI in healthcare, highlighting major concerns and suggesting pathways for legal reform.
Liability for AI Errors
One of the primary legal concerns is determining liability in cases of AI errors that result in patient harm. Traditional legal frameworks are based on human decision-making, and adapting these to accommodate decisions made by machines is challenging [30]. The literature suggests a need for clear guidelines on whether liability should rest with the AI developers, the healthcare providers, or a combination of both [31]. Developing specific regulations that address the unique nature of AI decision-making processes will be crucial for maintaining trust and accountability in AI-driven healthcare services [32].
Regulatory Compliance
Ensuring that AI applications comply with existing healthcare regulations, such as HIPAA in the United States for patient data privacy, is another significant legal challenge [33]. AI systems must be designed to adhere to stringent standards of data security and patient confidentiality [34]. Furthermore, as AI technologies evolve, so too must the regulatory frameworks that govern their use, requiring continuous dialogue between technologists, legal experts, and regulators [35]. This dynamic regulatory landscape necessitates that healthcare organizations remain vigilant and proactive in understanding and implementing legal requirements [36].
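As one routine safeguard in this compliance landscape, records are typically de-identified before being used for model development. The sketch below illustrates a simplified pseudonymization step using a keyed hash; it is an assumed example rather than a complete HIPAA Safe Harbor or expert-determination procedure, and the identifier list, field names, and functions (pseudonymize, deidentify) are hypothetical.

```python
import hashlib
import hmac

# Illustrative subset of direct identifiers to strip; not the full Safe Harbor list.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "mrn"}


def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Keyed hash: the same patient maps to the same research ID without exposing the MRN."""
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()[:16]


def deidentify(record: dict, secret_key: bytes) -> dict:
    """Return a copy of the record with direct identifiers removed and a stable pseudonym added."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["research_id"] = pseudonymize(record["mrn"], secret_key)
    return cleaned


record = {"mrn": "123456", "name": "Jane Doe", "age": 54, "diagnosis": "I10"}
print(deidentify(record, secret_key=b"store-this-key-in-a-secrets-vault"))
```

A keyed hash, unlike a plain hash, prevents re-identification by anyone who does not hold the key, while still allowing records from the same patient to be linked for research.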
Informed Consent
The concept of informed consent in healthcare is profoundly impacted by the introduction of AI. Patients must be fully informed about the involvement of AI in their care, including potential risks and benefits [37]. Legal standards for informed consent may need to be revised to address the complexities introduced by AI, ensuring that consent is not only informed but also comprehensible to patients who are not familiar with AI technology [38]. The literature calls for enhanced patient education efforts as part of the consent process to bridge the knowledge gap and empower patients in their healthcare decisions [39].
Conclusion
The integration of AI into healthcare and biomedical research brings a host of ethical and legal issues that must be addressed carefully so that this technology enhances, rather than detracts from, patient care. This review has highlighted several key priorities: improving data management to better protect patient privacy, developing mechanisms to reduce bias in AI algorithms, and making AI systems more transparent to healthcare professionals and patients alike.
Going forward, it is important that the development and implementation of AI in healthcare be informed by strong ethical frameworks and legal regulations. These must be updated on an ongoing basis to reflect advances in technology and changes in societal values. Future regulations must also be dynamic, allowing laws to adapt rapidly to new applications as they arise. The path forward requires close interdisciplinary collaboration: ethicists, lawyers, technologists, and healthcare providers must work together to ensure the responsible deployment of AI tools.
In addition, public awareness and education regarding AI in health are urgently needed. Patients and the general public should be made aware of where AI is being used, its potential benefits, and its possible risks. This is essential for gaining and sustaining public trust, as well as meaningful informed consent, in clinical settings where AI tools are employed. Finally, as AI technology continues to develop, so too must the methods used to assess its effect on health equity. Future research should be directed toward developing AI applications that are inclusive and fair, ensuring that all patient populations benefit from AI in healthcare.
References
- Ahmed Zeeshan, Khalid Mohamed, Saman Zeeshan, XinQi Dong (2020) Artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. Database: baaa010.
- Anton Philip S, Richard Silberglitt, James Schneider (2001) The global technology revolution: bio/nano/materials trends and their synergies with information technology by 2015. Rand Corporation, United States, pp. 94.
- Scatiggio Vittoria (2020) Tackling the issue of bias in artificial intelligence to design ai-driven fair and inclusive service systems. How human biases are breaching into ai algorithms, with severe impacts on individuals and societies, and what designers can do to face this phenomenon and change for the better.
- Badawy Walaa, Haithm Zinhom, Mostafa Shaban (2024) Navigating ethical considerations in the use of artificial intelligence for patient care: A systematic review. International Nursing Review.
- Sario A ul H (2024) AI Heals Colds: AI Revolutionizes Healthcare.
- Boppana Venkat Raviteja (2021) Ethical Considerations in Managing PHI Data Governance during Cloud Migration. Educational Research 3(1): 191-203.
- Cohen I Glenn (2019) Informed consent and medical artificial intelligence: What to tell the patient?. Geo LJ 108: 1425.
- Hanna Matthew, Liron Pantanowitz, Brian Jackson, Octavia Palmer, Shyam Visweswaran, et al. (2024) Ethical and Bias Considerations in Artificial Intelligence (AI)/Machine Learning. Modern Pathology: 100686.
- Prince Anya ER, Daniel Schwarcz (2019) Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review 105: 1257.
- Modi Tejaskumar B (2023) Artificial Intelligence Ethics and Fairness: A study to address bias and fairness issues in AI systems, and the ethical implications of AI applications. Revista Review Index Journal of Multidisciplinary 3(2): 24-35.
- Eke Christopher Ifeanyi, Liyana Shuib (2024) The role of explainability and transparency in fostering trust in AI healthcare systems: a systematic literature review, open issues and potential solutions. Neural Computing and Applications 113: 103655.
- Felder Ryan Marshall (2021) Coming to terms with the black box problem: how to justify AI systems in health care. Hastings Center Report 51(4): 38-45.
- Díaz-Rodríguez Natalia, Javier Del Ser, Mark Coeckelbergh, Marcos López de Prado, Enrique Herrera-Viedma, et al. (2023) Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion 99: 101896.
- Polineni Tulasi Naga Subhash, Kiran Kumar Maguluri, Zakera Yasmeen, Andrew Edward, et al. (2022) AI-Driven Insights Into End-Of-Life Decision-Making: Ethical, Legal, And Clinical Perspectives On Leveraging Machine Learning To Improve Patient Autonomy And Palliative Care Outcomes. Migration Letters 19(6): 1159-1172.
- Khanna Shivansh, Ishank Khanna, Shraddha Srivastava, Vedica Pandey (2021) AI Governance Framework for Oncology: Ethical, Legal, and Practical Considerations. Quarterly Journal of Computational Technologies for Healthcare 6(8): 1-26.
- Pless Nicola, Thomas Maak (2004) Building an inclusive diversity culture: Principles, processes and practice. Journal of business ethics 54: 129-147.
- Trowman Rebecca, Antonio Migliore, Daniel A Ollendorf (2024) Designing collaborations involving health technology assessment: discussions and recommendations from the 2024 health technology assessment international global policy forum. International Journal of Technology Assessment in Health Care 40(1): e41.
- Lewis Dave, Linda Hogan, David Filip, Wall PJ (2020) Global challenges in the standardization of ethics for trustworthy AI. Journal of ICT Standardization 8(2): 123-150.
- Mittelstadt Brent Daniel, Luciano Floridi (2016) The ethics of big data: current and foreseeable issues in biomedical contexts. The ethics of biomedical big data: 303–341.
- Fernández-Alemán José Luis, Inmaculada Carrión Señor, Pedro Ángel Oliver Lozoya, Ambrosio Toval et al. (2013) Security and privacy in electronic health records: A systematic literature review. Journal of biomedical informatics 46(3): 541-562.
- Williamson Steven M, Victor Prybutok (2024) Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Applied Sciences 14(2): 675.
- Hanna Matthew, Liron Pantanowitz, Brian Jackson, Octavia Palmer, Shyam Visweswaran et al. (2024) Ethical and Bias Considerations in Artificial Intelligence (AI)/Machine Learning. Modern Pathology: 100686.
- Olteanu Alexandra, Carlos Castillo, Fernando Diaz, Emre Kiciman (2019) Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in big data 2: 13.
- Kearns Michael, Aaron Roth (2019) The ethical algorithm: The science of socially aware algorithm design. Oxford University Press, USA.
- Farahani Milad, Ghazal Ghasemi (2024) Artificial intelligence and inequality: Challenges and opportunities. Int. J. Innov. Educ 9: 78-99.
- Shneiderman Ben (2020) Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 10(4): 1-31.
- Dwivedi Yogesh K, Laurie Hughes, Elvira Ismagilova, Gert Aarts, Crispin Coombs, et al. (2021) Artificial Intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International journal of information management 57: 101994.
- Coglianese Cary, David Lehr (2016) Regulating by robot: Administrative decision making in the machine-learning era. Geo. LJ 105: 1147.
- Shneiderman Ben (2020) Bridging the gap between ethics and practice: guidelines for reliable, safe, and trustworthy human-centered AI systems. ACM Transactions on Interactive Intelligent Systems (TiiS) 10(4): 1-31.
- Kumar Alok, Utsav Upadhyay (2024) Ethical Implications in AI-Based Health Care Decision Making: A Critical Analysis. AI in Precision Oncology 1(5): 246-255.
- Nizamullah FNU, et al. (2024) Ethical and Legal Challenges in AI-Driven Healthcare: Patient Privacy, Data Security, Legal Framework, and Compliance.
- Wahab Nor, Aida Binti Abdul, Rahman Bin Mohd Nor (2023) Challenges and Strategies in Data Management and Governance for AI-Based Healthcare Models: Balancing Innovation and Ethical Responsibilities. AI, IoT and the Fourth Industrial Revolution Review 13(12): 24-32.
- Díaz-Rodríguez Natalia, Javier Del Ser, Mark Coeckelbergh, Marcos López de Prado, Enrique Herrera-Viedma, et al. (2023) Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion 99: 101896.
- Kyrtsidou Agapi Chryssi (2024) Navigating the digital disruption: agile strategies for regulated global industries in the era of rapid technological changes in international business.
- Cohen I Glenn (2019) Informed consent and medical artificial intelligence: What to tell the patient?. Geo. LJ 108: 1425.
- Felzmann Heike, Eduard Fosch Villaronga, Aurelia Tamò-Larrieux, Christoph Lutz (2019) Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society 6(1): 2053951719860542.
- Angel Sanne, Kirsten Norup Frederiksen (2015) Challenges in achieving patient participation: a review of how patient participation is addressed in empirical studies. International journal of nursing studies 52(9): 1525-1538.