AI in Healthcare: A Practical Starting Guide for Leaders and Clinicians
By Dr. Ernest Wayde, PhD, MIS

Artificial Intelligence (AI) has arrived in healthcare faster than most organizations expected. Now, the pressure to “do something with AI in healthcare” is real. Board members are asking strategic questions that leadership teams are not yet equipped to answer. Vendors are pitching new AI tools to executives. Clinicians are finding AI-assisted tools appearing in their workflows, often with little explanation or training. There is little discussion about whether AI tools should be used or what boundaries are needed to guide their use. So where should leaders and clinicians actually begin?
This post is for both the leader trying to build an organizational strategy and the clinician trying to understand what AI means for their practice. The goal is not to sell you on AI or warn you away from it. Instead, it is meant to give you a starting point: what AI is, what it is doing in healthcare today, and where real benefits and risks exist so you can make informed decisions.
What Is Artificial Intelligence in Healthcare?
To start with, the term “AI” has become a catch-all, used to describe everything from a chatbot that answers scheduling questions to a clinical algorithm that detects cancer in a medical image. That range matters, because the risks, the oversight requirements, and the appropriate use cases are completely different depending on what you are talking about.
What does AI actually do?
AI refers to computer systems that perform tasks traditionally associated with human intelligence, but this does not mean they possess human intelligence. This includes things like recognizing patterns, making predictions, understanding language, and generating content. What distinguishes modern AI from conventional software is that these systems are not explicitly programmed with rules. Instead, they learn from large amounts of data, identifying patterns that allow them to make predictions or decisions on new inputs they have never seen before.
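To make that distinction concrete, here is a deliberately tiny sketch in Python, using invented numbers with no clinical meaning. It contrasts a hand-written rule with a “model” that picks its own threshold from labeled examples. It is only an illustration of the learning-from-data idea, not how real clinical AI is built.

```python
# Toy illustration only: invented values, no clinical meaning.

# Conventional software: a human writes the rule explicitly.
def rule_based_flag(value: float) -> bool:
    return value > 5.0  # the threshold is chosen by the programmer

# Learning from data: the threshold comes from whatever best fits the examples.
examples = [(3.1, False), (4.2, False), (4.8, False),
            (5.9, True), (6.4, True), (7.0, True)]  # (value, should_flag)

def learn_threshold(data):
    # Try each observed value as a cutoff and keep the one that
    # classifies the most examples correctly.
    best_cutoff, best_correct = None, -1
    for cutoff in sorted(v for v, _ in data):
        correct = sum((v > cutoff) == label for v, label in data)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

learned = learn_threshold(examples)
print(f"Learned cutoff: {learned}")             # derived from the data, not hand-written
print(f"Flag a value of 6.1? {6.1 > learned}")  # applied to an input it has never seen
```

The important point is that the second approach behaves however its examples lead it to behave. Change the examples and you change the system, which is exactly why the training data discussed below matters so much.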
How LLMs are built: Today, “AI” most often refers to generative AI tools and applications like ChatGPT. These tools can generate text, images, audio, and more, and they are powered by what are called Large Language Models, or LLMs. Understanding how LLMs are built and how they generate a response is foundational to understanding the benefits and limitations of AI and the implications of its use in healthcare.
LLMs are trained on enormous amounts of text, including billions of web pages, books, articles, research papers, and more. The model does not read that material the way a human does. It learns statistical patterns: which words tend to appear near which other words, in which contexts. Everything about how the model behaves, what it gets right, what it gets wrong, and what biases it carries, traces back to that training data and the choices made about what to include and how to weigh it. Bias in AI is not added later. It is baked in from the beginning. Watch a brief video here that explains this in more detail.
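For readers who want to see what “statistical patterns of which words follow which” means in miniature, here is a purely illustrative Python sketch. It counts which word follows which in three invented sentences; real LLM training is vastly larger and more sophisticated, but the underlying idea, learning patterns from whatever text the model is given, is the same.

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus." Real models train on billions of pages.
corpus = [
    "the patient reports chest pain",
    "the patient reports mild headache",
    "the patient denies chest pain",
]

# For each word, count which words tend to follow it.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

print(next_word_counts["patient"])  # Counter({'reports': 2, 'denies': 1})
print(next_word_counts["chest"])    # Counter({'pain': 2})
```

Everything this toy “model” knows is in those counts, and the counts are nothing more than a reflection of the sentences it was given. That is the sense in which bias is baked in from the beginning.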

How they generate a response: When you type a question and hit send, the model is not retrieving a stored answer or consulting an external source. It is predicting, one word at a time, what the most statistically likely next word is, given everything in front of it. It repeats that process until the response is complete. This is why these tools can produce language that sounds authoritative, well-structured, and clinically precise, even when the content is incomplete or simply wrong. Watch a video that explains this with examples here.
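Continuing the toy sketch above (it reuses the next_word_counts from the previous snippet), this is what “one word at a time” looks like in miniature: pick the most likely next word, append it, and repeat. Real LLMs are far more sophisticated, but the loop is the same basic idea.

```python
def generate(start_word: str, max_words: int = 6) -> str:
    # Repeatedly append the most statistically likely next word.
    words = [start_word]
    for _ in range(max_words):
        options = next_word_counts[words[-1]]
        if not options:  # nothing ever followed this word in the "training data"
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # e.g. "the patient reports chest pain"
```

Notice that the output reads fluently, yet the toy “model” has no idea whether it is true of any actual patient. That gap between fluency and truth is the point of the next paragraph.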
That is the most important thing to understand about these systems: fluency is not the same as accuracy. The model generates language that sounds right, not language that is right. It has no concept of truth and cannot recognize the limits of its own knowledge. However, that won’t stop it. It will keep generating. Which means a confident-sounding response tells you almost nothing about whether that response is correct.
This is not a reason to avoid these tools. It is a reason to understand them before you use them, especially in a clinical setting where the stakes of a wrong answer are not abstract. AI models are not trying to give you the truth, so it is critical that you validate their responses before acting on them.
What AI Is Doing in Healthcare Right Now
Now that we understand what AI actually does and how it works, let’s look at how that is impacting healthcare today.
Reducing Administrative Burden
Administrative burden is one of the biggest contributors to clinician burnout and one of the most significant barriers to efficient, high-quality patient care. Research from the American Medical Association shows that for every eight hours of scheduled patient time, physicians are spending close to six of those hours on the electronic health record [1]. Primary care physicians, on average, spend more time on EHR documentation per visit than they do with the patient [2]. And that does not even include the hours many physicians spend finishing notes and tasks at home after their patients have left. I can attest to this from personal experience. My wife is a physician, and she spends a great deal of time at home finishing notes and completing tasks after a full day of seeing patients.
This is the area where AI is currently having the most consistent and well-documented impact in healthcare. Ambient AI scribes are tools that listen to a patient encounter and automatically generate a draft clinical note for the clinician to review and approve. They are quickly becoming one of the most widespread and consistently successful applications of AI in healthcare today [3].
A multicenter study published in JAMA Network Open followed physicians and advanced practice providers across six U.S. health systems. After just 30 days of using an ambient AI scribe, clinician burnout rates dropped significantly, with meaningful improvements in cognitive load and focused attention on patients as well [4]. A randomized trial published in NEJM AI found that providers using ambient AI scribes reclaimed a substantial amount of documentation time each day [5].

The result is more time with patients and less time on paperwork, which is what most clinicians went into healthcare to do in the first place.
One important note: AI-generated notes still require the clinician to review and approve them before final submission to the record. A clinician using an AI scribe tool is still fully responsible for the quality and accuracy of every note submitted. These tools are designed to support clinical work, not replace clinical judgment.
Supporting Diagnostics and Early Detection
Remember how we described what AI actually does: it learns statistical patterns from large amounts of data and applies those patterns to new inputs. Medical imaging is one of the most natural fits in healthcare for that capability. Compared to many clinical tasks, it is largely data-driven. A model trained on millions of scans can learn to recognize patterns that are easy to overlook, even for experienced clinicians reviewing high volumes of images under time pressure.
These tools, trained on large libraries of scans, can flag anomalies that might otherwise be missed, including early-stage tumors, signs of glaucoma, and indicators of lung disease, before routing them to a clinician for review.
It is important to note that AI does not currently outperform clinicians; rather, it supports them and improves their performance. A 2024 meta-analysis published in npj Digital Medicine reviewed 36 studies and found that humans and AI working together produced better diagnostic results than either working independently, while also meaningfully reducing the time clinicians spent reviewing images [6]. The technology surfaces what is worth a closer look. The clinician decides what to do about it.
AI in diagnostics is not a replacement for clinical judgment. It is a tool that helps make sure less gets missed.
These are the two areas where the evidence is strongest and most consistent. AI is also showing early promise in other areas, including predictive risk modeling, patient flow management, and prior authorization, though the research in those areas is still developing. The goal here is not to catalog everything AI is doing in healthcare, but to focus on areas where the evidence is strongest to date.

The Risks Clinicians Need to Understand
The applications above are real and they are useful. But understanding what they do well is only part of the picture. Knowing where they fall short matters just as much.
Alongside what AI does well, there are at least two limitations that every clinician needs to understand.
The first is that these tools do not communicate uncertainty. When an AI tool gives you an output, it typically does not tell you how confident it is, or under what conditions it tends to be wrong. A wrong answer looks exactly the same as a right one. There is no hesitation, no flag, no signal that anything is off. For a clinician, that is a significant problem. You cannot calibrate your trust in a tool that never tells you when to doubt it.
The second is that they lack transparency. Many AI models cannot explain how they arrived at a particular output. You can see what they produced but not why. For administrative tasks, that may be manageable. For anything that touches a clinical decision, a tool that cannot be interrogated is a tool that cannot be held accountable.
It is also worth noting that embedded bias in training data is a real and documented issue across clinical AI tools. Models trained on data that does not reflect your patient population may perform less accurately for those patients [7].
The inability to communicate uncertainty and the lack of transparency matter because they create a direct and documented risk for clinicians in practice. When clinicians know an AI tool is involved, there is a well-established tendency to defer to its output, even when something about that output does not look right. This is called automation bias, and it has been documented in clinical settings. A commentary published in JAMA noted that even in controlled settings, without the usual time pressures of clinical practice, clinicians favored AI-based recommendations and continued to defer to them even when the information was contradictory or clinically questionable [8]. In real-world clinical environments, where time pressure is constant and cognitive load is high, that risk is likely greater, not smaller.
A related concern is accountability. When an AI tool is wrong and a patient is harmed, the question of who is responsible is not straightforward. The clinician who approved the output, the organization that deployed the tool, and the vendor who built it may all have a role. What is clear is that the clinician remains professionally responsible for every clinical decision, regardless of whether AI was involved. Using an AI tool does not transfer that responsibility. It adds a layer of complexity to it.
How to Start Using AI in Healthcare
With a clearer picture of what AI is doing in healthcare and the risks that come with it, the natural next question is where to start if you want to use AI in your clinical practice. The sequence matters more than most people expect. One of the most common mistakes I see is organizations jumping straight into AI adoption before anyone on the team has a real feel for what AI actually is. Here is the sequence that makes sense.

Step 1: Get Comfortable With AI Personally First
Before you start using a single clinical AI tool, spend time with AI outside of work. Use ChatGPT, Claude, Microsoft Copilot, Google Gemini, or any general-purpose AI assistant to help you draft an email, summarize something you just read, find recipes, or answer a question you are curious about. You don’t need to become an expert. You just need enough firsthand experience to develop a feel for what these tools do well and where they can go wrong. Get a sense for the benefits and risks associated with using them. This step gets skipped more than any other, and it shows. Leaders who haven’t personally used AI are not well positioned to evaluate vendor claims or ask the right questions. Clinicians who have no personal experience with AI have little sense of how these tools behave and where they tend to go wrong. Personal experience is the foundation. Everything else builds on it.
Step 2: Build Your Understanding Deliberately
Once you have some personal experience with AI, invest time in understanding it more deeply. You don’t need a technical background. You need enough knowledge to participate meaningfully in decisions that will increasingly affect your organization and your patients. The APA and AMA have both published materials on AI in clinical practice worth reviewing. Peer-reviewed journals like JAMA and NEJM are also covering clinical AI in increasingly accessible terms. You don’t need to read everything. You just need enough context to ask good questions, recognize when something deserves a closer look, and consult experts when needed.
Step 3: Build a Governance Plan Before You Use AI
Personal familiarity and a working understanding of AI are important. But before AI enters your organization in any professional capacity, there needs to be a governance plan in place. The governance plan must define how AI tools are evaluated, approved, used, monitored, and, when necessary, discontinued. This matters more than most people realize. What may happen is that one clinician starts using an AI tool independently, then a few more do, and before anyone has made a deliberate decision, the tool is woven into workflows across the organization. By the time leadership engages, there is no evaluation process, no oversight structure, and no clear accountability. Governance ends up being built after the fact, when it should have been the starting point.
For organizational leaders and executives, this is your responsibility. A governance plan doesn’t need to be elaborate to start, but it does need to exist before AI use begins, not after adoption is already underway. For clinicians and practitioners who are not in a position to build that plan themselves, your role is just as important. Push for it. Understand what governance means and why it matters and make the case to the leaders in your organization that it needs to be in place before AI tools are used in any clinical or operational context. You are closest to the patient. That gives you standing to ask hard questions about how AI tools in your workflow were evaluated, what oversight exists, and who is accountable when something goes wrong.
If your organization is trying to figure out where to start with AI governance, that’s the work we do at Wayde AI.
A Final Thought
AI will change healthcare. That is not a prediction anymore. It is already happening in clinical settings across the country. The question isn’t whether your organization or your practice will engage with it, but how deliberately you approach that engagement. The leaders and clinicians who navigate this well won’t necessarily be the ones who moved fastest. They will be the ones who asked the right questions early, built the right oversight processes, and kept patient welfare at the center of every decision. That is not a complicated standard. But it does require intention. Start there.
Staying informed doesn’t have to mean hours of reading. The Wayde AI Brief is a short weekly intelligence brief for healthcare and mental health leaders navigating real-world AI adoption, governance, and risk. Subscribe for free.
Frequently Asked Questions
How do I get started with AI in healthcare?
ChatGPT, Claude, Microsoft Copilot, and Google Gemini are all free or low-cost and require no technical background. Just start using one for everyday tasks. Summarize an article. Draft a note. Ask it something you are genuinely curious about. You will quickly get a feel for what these tools are good at and where they fall short, which is exactly what you need before you encounter AI in a clinical or organizational setting. Make sure not to enter any private or confidential information into these free versions.
What are the best resources for AI literacy in healthcare?
The APA and AMA have both published materials on AI in clinical practice worth reviewing. Peer-reviewed journals like JAMA and NEJM are covering clinical AI in increasingly accessible terms. Google’s AI Essentials course and Microsoft’s AI literacy resources are also solid starting points that don’t require any technical background. Look for resources aimed at practitioners, not developers.
Is AI safe to use in behavioral health?
It depends on the application and the context. AI-assisted documentation carries a very different risk profile than AI-assisted clinical assessment. The more directly a tool influences a clinical decision, the more rigor you need in evaluating and overseeing it.
Do I need an AI governance plan before implementing AI in healthcare?
Yes. Before AI enters your organization in any professional capacity, a governance plan needs to be in place. The organizations that run into the most trouble are the ones that allow individual use to quietly become organizational use before any deliberate decision has been made. Start with a specific, contained problem, establish clear oversight, and build your AI governance structure before you use it.
About the Author
Dr. Ernest Wayde is the Founder and Principal of Wayde AI, a healthcare AI ethics consulting firm. He holds a doctorate in Clinical and Cognitive Psychology from the University of Alabama and a master’s in Information Systems from Wright State University, with advanced certifications from MIT Sloan and the Microsoft Academy. He works with clinical and administrative leaders on ethical, responsible, and compliant AI adoption in healthcare and behavioral health.
References
[1] Holmgren, A. J., et al. (2024). National comparison of ambulatory physician electronic health record use across specialties. Journal of General Internal Medicine. https://link.springer.com/article/10.1007/s11606-024-08930-4
[2] Rotenstein, L. S., et al. (2023). System-level factors and time spent on electronic health records by primary care physicians. JAMA Network Open. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2812258
[3] Resnick, K., et al. (2026). Ambient AI scribes: What is the return on investment? JAMA Network Open. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2843526
[4] Olson, K. D., et al. (2025). Use of ambient AI scribes to reduce administrative burden and professional burnout. JAMA Network Open. https://pmc.ncbi.nlm.nih.gov/articles/PMC12492056/
[5] Lukac, P. J., et al. (2025). Ambient AI scribes in clinical practice: A randomized trial. NEJM AI. https://ai.nejm.org/doi/abs/10.1056/AIoa2501000
[6] Kwan, J., et al. (2024). Impact of human and artificial intelligence collaboration on workload reduction in medical image interpretation. npj Digital Medicine. https://www.nature.com/articles/s41746-024-01328-w
[7] Siddique, S. M., et al. (2024). The impact of health care algorithms on racial and ethnic disparities: A systematic review. Annals of Internal Medicine. https://www.acpjournals.org/doi/10.7326/M23-2960
[8] Khera, R., Simon, M. A., & Ross, J. S. (2023). Automation bias and assistive AI: Risk of harm from AI-driven clinical decision support. JAMA, 330(23), 2255–2257. https://pubmed.ncbi.nlm.nih.gov/38112824/

