My first experience with the use of Artificial Intelligence (AI) in nursing practice was seven years ago, when I was working as a nursing informaticist for an academic medical center. I was assigned a project to pilot a Machine Learning (a type of AI) predictive model designed to identify patients at higher risk for developing sepsis. When I was assigned the project, I did not understand how Machine Learning worked and had to seek out information to learn the terminology and general principles so that I could effectively integrate the tool into nursing workflows and train nurses to interpret the model outputs and detect errors. Because I was concurrently pursuing my PhD, this experience led me to pivot my research focus entirely to AI integration in nursing practice, and this remains my program of research today. Even though I am immersed in this topic daily, it is a rapidly evolving field and I still have much to learn. I appreciate that this OJIN topic on Artificial Intelligence provides readers with a variety of perspectives on the use of AI in healthcare. Understanding the complexities of AI, including the potential harms and benefits, will guide nurses to create a future with AI in healthcare that augments nursing practice and minimizes risks to ourselves and our patients.
Through the various perspectives curated in this OJIN topic, the reader can explore a key takeaway: AI is neither good nor bad, but it has significant potential to help us tackle some problems in healthcare if it is designed and implemented thoughtfully and with great care. Humans and healthcare professionals must determine for what purposes and in what contexts the use of AI is appropriate, and consider when AI use may cause more harm than benefit, including cases where it might disproportionately harm specific groups. Nurses, as patient advocates grounded in our core ethical values, are well positioned to consider the impacts of AI use holistically. While health technology companies may primarily value efficiency and cost savings, we as care workers can broaden the evaluation of the potential impacts of AI in healthcare. For example, we might examine the potential effects on clinician well-being, clinician-patient relationships, and the environmental impacts of this energy-hungry technology. The most recent Code of Ethics for Nurses authored by the American Nurses Association ([ANA], 2025) has several provisions that address the ethical use of AI in nursing practice and can help guide nurses as we embark on this new era in healthcare. The final statement of Provision 7.5 sums up our responsibilities as nurses when evaluating a novel technology: “By critically questioning the underlying assumptions of these innovations, nurses may affirm that they reflect the values, principles, and goals of the profession” (ANA, 2025, para. 4). My hope is that the articles in this OJIN topic on AI will bring readers a few steps closer to being able to critically examine the use of AI in nursing practice.
Readers should come away from these articles with several key insights. First, there is no single established definition of AI, as you will see from the varied definitions presented by the authors. AI is an umbrella term, like the word “food,” and its types and flavors vary widely. The type of AI influences how it works and whether it is a good fit for the problem it is designed to address. The articles in this topic provide an overview of the various types of AI and their potential applications. While nurses should not be expected to take computer science courses to practice nursing, they do have an ethical responsibility to develop a level of AI literacy that enables them to understand the basics of how AI works, how to detect errors, and when an AI tool is (or is not) a good fit for a problem. We can think of AI as a new team member; like all of the other team members we work with, we need a general understanding of how that team member was trained and how that training limits the tasks it can carry out effectively and safely. And just like our human team members, AI team members make mistakes. Our vigilance determines whether those mistakes reach our patients.
Like many people, I often oscillate between excitement about the potential for AI to benefit humanity and fear of the ways it may be misused or cause harm. An example of a use case in nursing that I find inspiring is the COmmunicating Narrative Concerns Entered by RNs (CONCERN) early warning system (EWS), which leverages real-time nursing documentation and Machine Learning to identify deterioration risk and has significantly improved patient outcomes (Rossetti et al., 2025). Another example is ThriveLink, recipient of a 2024 American Innovation Award, an AI telephonic innovation created by nurses and social workers that enables people to apply for social services by simply answering questions over the phone rather than struggling to fill out complex forms (ThriveLink, n.d.). These are just two of many examples of nurse innovators leveraging AI for good.
On the flip side, there are uses of AI that cause me great concern. For example, I recently attended a health IT conference where vendors were demoing “AI nurses” for uses such as post-operative follow-up calls with patients. Telehealth nurses carry out holistic assessments: they hear not only a patient’s words, but also their tone and affect, as well as background noises that can provide insight into the home environment. A patient may say, “I’m fine,” but it is clear to a nurse when they are not. This is incredibly nuanced care that the nursing profession should not delegate to AI. In response to the emergence of these “AI nurses,” a nurse in Oregon succeeded in getting legislation passed establishing that a nurse must be human (Walkner, 2025). I did not think I would live to see the day when we needed to write that into law, but here we are. These are just a few examples to contemplate as we, as a profession, determine where AI might augment the nursing process and where it may have no place.
I encourage readers to explore the contributions to this OJIN topic on Artificial Intelligence. The authors represent a variety of perspectives on AI in healthcare, and their diverse viewpoints illustrate the multi-faceted nature of this topic. I urge all nurses to make their own determination about when and where AI may or may not augment their practice and to find a way to actively contribute to the design and/or governance of AI where they work and live. We need to decide collectively, as a profession, the future we want to live in and what role AI will have in that future.
The article Advancing Nursing Practice Through Artificial Intelligence: Unlocking Its Transformative Impact, by Dr. Jennifer Shepherd and Dr. Amy McCarthy, provides essential definitions of key terms related to AI. The authors describe some of the potential uses of AI in nursing, such as how AI-driven remote monitoring could extend access to timely nursing care. Key challenges of using and implementing AI in healthcare, such as AI literacy, user acceptance, regulatory and policy constraints, and concerns about data quality and availability, are also explored. The authors offer helpful recommendations for tactics that may support successful integration, alongside some common pitfalls. They also outline an exciting list of emerging roles for nurses in AI.
In the article Digital Defense Toolkit: Protecting Ourselves from Artificial Intelligence-Related Harms, Dr. Rae Walker challenges readers to consider that many problems in healthcare result from structural and systemic vulnerabilities related to a lack of investment by healthcare organizations and governments. As such, AI may not get to the root of these problems. The author challenges us to consider that whether AI is beneficial or harmful depends on the application, the context, and whose perspective is used to frame what counts as a benefit or a harm. They assert that nurses need to advocate for consensual negotiation prior to AI integration into their practice rather than having these tools imposed upon them. The author applies the concept of digital defense to outline strategies that nurses, and other care workers, can use to protect themselves from the ways AI could adversely impact personal vitality, opportunity, connectedness, contribution, and inspiration. Finally, Dr. Walker provides a summary of resources for nurse engagement and advocacy to support care workers’ psychosocial well-being and to defend against potential harms of AI, such as privacy infringement.
In their brief article Applying Artificial Intelligence to Electronic Health Record Data to Advance Symptom Phenotyping: A Brief Practical Guide, Dr. Melissa Pinto and Dr. Jerrold Jackson introduce strategies to leverage AI to better understand and more effectively treat patient symptoms. They highlight nurses’ intuition and ability to notice subtle changes in a patient’s condition and ask whether this same type of insight could be captured using AI to identify symptom phenotypes, with the aim of fostering more effective and timely interventions. The authors also provide a guide for nurses looking to collaborate with data scientists and lead the design and deployment of AI-driven tools that improve symptom management. The guide includes questions related to data availability and security, patient safety, costs, and AI tool validation to support nurses in interrogating AI tools.
In the article Artificial Intelligence in Nursing Practice: Decisional Support, Clinical Integration, and Future Directions, Dr. Garry Brydges describes potential applications of AI in nursing practice such as personalization of patient education, ambient documentation tools, and robotics. The author highlights additional industry examples of AI applications, including citations from vendor blogs that allow the reader to get a pulse on the types of applications being developed by health information technology companies. Dr. Brydges stresses the need for interprofessional teams that include clinicians, data scientists, bioethicists, and policymakers to successfully shape the future of AI in healthcare. For nurses to contribute effectively to these interprofessional teams, the author calls for comprehensive nurse training in AI so that nurses can lead this digital transformation, which will have large impacts on nursing practice.
In the study Artificial Intelligence and Images Portraying Nurses Through the Decades, Dr. Janet Reed and colleagues examine AI-generated images of nurses. Given that the portrayal of nurses in images may shape public perceptions of nursing, it is imperative that these images are systematically examined. The generated images provide a poignant example of how biases in the data sets used to train AI can propagate stereotypes, in this case stereotypical perceptions of nurses. The authors share image outputs from three AI image generators, demonstrating how guardrails and data sets vary and how this influences the AI outputs. The results of this study provide important insights into biases about nurses that could be further propagated by AI. As media outlets, and even nurse educators and researchers, reach toward generative AI tools for images of nurses for publications and presentations, what is the risk of propagating embedded stereotypes and biases captured in training data sets? Is there a way for nurses to “infuse” more realistic and representative images into publicly available data sets?
In the article An Ethics of Artificial Intelligence for Nursing, Dr. Jess Dillard-Wright and Dr. Jamie Smith discuss AI as a dynamic technology that will be shaped by human values, priorities, and collective action. They situate their discussion of AI in healthcare within the broader political economy and how, in many countries, this results in a propensity to prioritize production rather than care. The authors provide insights on how costs, faith in technology (sometimes misplaced), and the centering of the United States healthcare system on “sick” care may all influence the AI tools we choose but may not get to the true roots of our problems. The authors also outline potential risks of AI at the macro level (e.g., environmental impacts, bias propagation), the meso level (e.g., nurse distress when there is misalignment between AI and nursing judgment, threats to patient autonomy when data privacy is not upheld), and the micro level (e.g., risk of care depersonalization when technology’s focus on speed and efficiency contrasts with nursing’s more nuanced and relational assessments). The authors call for distributive justice so that benefits and burdens do not fall disproportionately on groups with less power. They also advocate for open-source tools to foster equity and democratize access for nurses, patients, and under-resourced health systems.
The journal editors invite you to share your response to this OJIN topic addressing Artificial Intelligence in Nursing and Healthcare, either by writing a Letter to the Editor or by submitting a manuscript that furthers the discussion initiated by these introductory articles.
Author
Ann Wieben, PhD, MS, BSN, RN, NI-BC
Email: wieben@wisc.edu
ORCID ID: https://orcid.org/0000-0001-9275-0439
Ann Wieben is a nurse scientist and Clinical Assistant Professor at the University of Wisconsin-Madison. She is board certified in nursing informatics and an engaged member of the American Medical Informatics Association and the Nursing Knowledge Big Data Science initiative. Dr. Wieben uses mixed methods to examine multiple facets of the design, integration, and adoption of decision support systems in healthcare practice, with a current research focus on tools driven by Artificial Intelligence. Dr. Wieben is also interested in research related to the use of Artificial Intelligence in nursing education programs and the impact of these tools on the development of nurses’ critical thinking skills and cognitive load. Dr. Wieben is an Emerging Leader with the Alliance for Nursing Informatics and has built a knowledge base to foster AI literacy among nurses that can be found at https://ai4nurses.wiscweb.wisc.edu/
References
American Nurses Association. (2025). Code of ethics for nurses. https://codeofethics.ana.org/home
Rossetti, S. C., Dykes, P. C., Knaplund, C., Cho, S., Withall, J., Lowenthal, G., ... & Cato, K. D. (2025). Real-time surveillance system for patient deterioration: A pragmatic cluster-randomized controlled trial. Nature Medicine, 1-8. https://doi.org/10.1038/s41591-025-03609-7
ThriveLink. (n.d.). https://www.mythrivelink.com/
Walkner, A. (2025, March 5). Bill banning AI from using the title “Nurse” is approved in OR – Only humans are nurses. Nurse.org. https://nurse.org/news/nurse-ai-ban-law/