
AI and Privacy in Healthcare

[Image: A tablet displaying a medical chart, EKG, and vital signs, with a stethoscope sitting on top of it. Caption: "AI and Privacy in Healthcare"]

As with any major technological advancement, AI brings the potential for wonderful and enriching use cases, as well as a myriad of ethically sticky situations.

As artificial intelligence rapidly expands into various aspects of our lives, let's pause and take a look at how AI is infiltrating your healthcare. We'll also look at how to best reap the benefits while protecting yourself from the pitfalls. Let's examine the use cases first, and we'll cover the risks and benefits along the way.


Where you might be seeing AI in 2025:


Marketing Materials

The ability to generate images and write text with AI has rapidly enabled healthcare providers to create brochures, social media posts, and blog posts (though not this one!) without the need for marketing departments, graphic designers, or heavy time investments.

That being said, patients should be aware that AI makes mistakes, and some people rely too heavily on AI to produce high-quality content. If a blog post seems repetitive or overly general, the odds it was written by AI are higher. While providers should be checking their work before publishing, some are not as careful as they should be when pressed for time.

The California AI Transparency Act requires disclosure of content created with AI, but it is only a state law, not a federal or international mandate. As of 2025, it's important to know that the "rules of the game" are still being written for AI, and it is likely that many of them will be established through litigation, not legislation. While state and federal legislators are still learning about this rapidly advancing technology, the world is already adopting it. As lawmakers play catch-up, many of the rules will be decided as judges interpret existing laws and attempt to apply them to this novel technology.

AI companies are faced with a challenging dilemma. They can either wait for the legislation to be written, and risk falling behind their competitors, or they can choose the route of "asking for forgiveness instead of permission." Most AI companies are choosing to press ahead into uncharted territory, hoping to come out on top. Businesses are lured into using AI at every turn. While writing this blog post, I noticed that our website host offers AI tools to generate entire posts, design titles, outline content, and improve grammar and writing style. Although they weren't used for this post, I have to admit, I did try them out to see what they can do. As a practice, we're committed to quality over quantity. At this point, the AI isn't a better writer than I am, but as the technology develops, that may change.

As a consumer, it is ultimately up to you to protect your own interests. You most likely cannot rely on the government or the AI companies to do it for you. Transparency is a vital part of our business model, and it ultimately inspired us to write this post. We want our patients to be informed so they can make educated decisions about the role AI plays in their healthcare. While AI might save us time, it isn't worth it if it comes at the patient's expense.


Scheduling & Intake Phone Calls

It's no secret that high quality administrative staff are hard to hire and keep in healthcare. They are frequently the lowest paid employees involved in your care. They also often bear the brunt of the responsibility to do unpleasant tasks like informing patients that their insurance is not going to cover their bill, or that they have to wait several weeks for an appointment.

Replacing administrative staff with AI is attractive because, after the initial setup costs, the AI costs less than paying an employee. It also has the advantage of being more efficient, as it can call multiple patients at once. The AI won't call out sick and doesn't need health or dental insurance. It won't complain about working the early or late hours that patients prefer for a phone call, and it won't be ruffled if a patient is rude. Once the AI is trained to handle a situation, it can do so consistently moving forward. This eliminates the issue of patients getting different or conflicting answers from different administrative staff.

Because this saves healthcare practices money, it is likely one of the first places you'll see AI integrated into your healthcare. Don't be surprised if you get a phone call from "someone" saying, "Hi, my name is Avery and I'm an AI assistant for ABC Clinic."

Even when you are talking to an actual person, most companies already give the disclaimer that "this call may be monitored or recorded for quality assurance and training purposes," so for patients, this change is not a huge difference.

Arguably, there is more room for improvement than risk, considering that the AI will be a "forever employee." With no turnover, the AI employee just gets better over time, and that ultimately means smoother experiences for patients.


AI Scribes: Your Doctor Isn't Writing Your Notes

If you haven't encountered it yet, you soon will. A provider will offer you a consent form, requesting permission to record your voice during your visit. The request seems innocent enough, but there are several questions we ought to ask before signing consent to have our voices recorded. There are even more questions we should ask about allowing a new and developing technology to write legal documentation about our health.

When it comes to recording your voice, it's important to ask both how the recording is being made and how it will be used. Scarlett Johansson lent her voice to bring an AI assistant to life in the movie "Her" back in 2013. Fast-forward to 2024, and she found herself demanding answers from OpenAI about why "Sky," a voice model of ChatGPT, sounded eerily similar to her own. Johansson claims she had turned OpenAI down when they asked to use her voice. Her lawyers demanded that OpenAI reveal the process used to develop the voice to prove its origins. We may never have the answer. OpenAI denied wrongdoing but decided to discontinue using that voice (1).

You could argue that they did so to protect their trade secrets and not reveal how they develop their AI software, because they wanted to avoid bad press and legal fees, or out of a genuine desire to respect Johansson. Regardless of which view you take, it seems that in the midst of the AI technology race, there will be a balancing act of asking for forgiveness vs. permission.

While you may not be famous like Scarlett Johansson, it's important to recognize that if something goes awry, you may not have the financial and legal resources that she does either. And though you may not be famous now, someday you might be. This question is especially important when considering allowing your children's voices to be recorded. Their futures are yet unwritten, and what seems insignificant now may not be later in their lives.

If a voice recording is being made, it's important to learn what software is being used and what its security level is. Recording a voice directly into HIPAA-compliant electronic medical record software affords more privacy protection than a recording made using Google Voice.

The second question we should ask is, "How will that recording be used?" There are some obvious positives for doctors in decreased documentation time. There are also wins for patients, who receive fuller attention from doctors who aren't so busy writing their notes that they don't even look at the patient while they're talking.

It's important here again to recognize, however, that AI is a developing technology and can make mistakes. Your provider should review the note the AI creates before signing off on it. The question is, will they really do it?

Many a patient has gone to look at their after-visit summary and seen information marked that isn't accurate. This is in part because electronic medical record software includes pre-set responses that are sometimes carried forward into a note, even if the doctor didn't ask the question. If they forget to unmark the pre-set, it is dropped into the note.

AI scribes may afford doctors more time to pay attention to these details and correct them. They may also lure the doctor into a false sense of security that the note is "probably accurate," leading them to sign it without looking closely. While this isn't necessarily worse than what already occurs in healthcare documentation, deciphering the causes would be more difficult. Pre-sets in medical records are standardized and easily identified as a potential source of documentation errors. If the AI makes an error, figuring out how it got there and what went wrong is harder.

It's important to ask how your recording will be used, and who retains rights to that data. Will the recording be used to train the AI model? Will the recording be kept and stored as part of your medical record, or will it be deleted? Can you opt in/out of allowing your recording to be used to train the AI? Can you request the recording be deleted? For more information on the scenarios where this matters, see Legal Concerns for AI Recordings (coming soon).

AI & Insurance Claim Submissions

Companies using AI to streamline the claims process can save both patients and providers a lot of time, trouble, and money. If healthcare providers don't have to pay an authorizations specialist, a billing department, and a claims denial administrator, it reduces the cost of providing healthcare services to patients. If doctors don't have to deal with writing denial appeal letters, they have more time to care for their patients. This software is attractive for patients and providers alike.

Health insurers are not incentivized to make the process easy. If a claim goes through, they have to pay it out and they lose money. It's actually in their best interest to make the process difficult. If patients get frustrated, give up, and just pay the bill, the insurer doesn't have to. If a provider lacks the time and staff and chooses not to fight the denial, they either eat the cost of the services and write it off, or send the bill to the patient, who is likely to be upset and complain.

Even when patients or providers call in advance to find out a patient's benefits, insurers play a recording that says something to the effect of, "If our staff tell you something that conflicts with your policy, the policy is what we'll actually follow." This raises the question, "Why do we even call to get the benefits at all if insurers aren't beholden to what their staff tell us anyway?"

The AI companies can cut through the middlemen and go right to the policy itself. They can then coach providers on which codes are reimbursable and which ones aren't, before the claim is even submitted. They can also go through a provider's past documentation and create a summary with all the information needed to fight the claim denial when it happens. Then the provider just has to review the letter the AI generates, sign, and submit it.

While the benefits are huge for patients and providers, there are always risks to consider. As the AI gains access to your past medical records, how will these companies use that information?

If the AI does a much better job of getting reimbursement from insurance companies, will insurers then increase premiums for subscribers anyway? The answer may be yes, but keep in mind they may be developing their own AI as well. If they can eliminate the need to pay administrative staff to process claims, it may balance out. Lower administrative costs may not make insurance prices go down, but they might keep them from going up.


Closing Thoughts:

While the use cases for AI in healthcare are likely to grow in the coming years, for now, the major risks patients should consider are:


  1. Misplaced trust that the AI is accurate (by doctors or patients)

  2. Lack of AI transparency to identify when/how it was used

  3. Lack of legislation

  4. Data ownership & privacy concerns

  5. Whether your data is used to train the AI


As consumers and businesses alike get caught up in the excitement of all the ways AI can make their lives easier, it's good to take a moment and critically evaluate each situation. It's important to ask questions and be a savvy consumer. Once you have all the information, you can weigh the risks and benefits and decide for yourself.


Disclaimer: This article on AI healthcare privacy is for educational purposes and should not be construed as individual legal or medical advice. You are encouraged to seek legal counsel when making legal decisions, and individual medical evaluation by a licensed healthcare provider for your individual medical needs.


References:



If you have questions, please feel free to contact our office. We'll do our best to answer your questions and let you know if we can help with your specific condition.


© 2025. All website literary material & blog posts are submitted for copyright protection and may not be used or reproduced without express written consent of Ally Total Physical Therapy.
