AI usage by physicians nearly doubled in 2024, according to a recent American Medical Association (AMA) survey of providers. Incorporating “augmented care” improves outcomes for patients, but heightens the exposure of hospitals, medical practices, and physicians to compliance pitfalls, particularly under HIPAA’s privacy and security rules.

AI-Driven Health Care Tools and Their Impact on Patient Privacy

Widely used health care AI technologies can be organized into three primary categories, each presenting new compliance challenges:

  • Clinical decision support systems (CDSS) assist health care professionals in diagnosing conditions, recommending treatments and improving clinical workflows. CDSS synthesizes patient data and clinical guidelines to offer personalized treatment recommendations.
  • Diagnostic imaging tools analyze radiological scans, pathology slides, and cancer screenings to detect anomalies and identify early indicators of disease.
  • Administrative automation manages tasks such as summarizing clinical notes, drafting discharge instructions and supporting patient-facing services like chatbots.

All three systems depend on ingesting and processing protected health information (PHI). Every convenience they create introduces classic HIPAA hazards, such as improper disclosure and secondary data use, and because these systems aggregate names, addresses, and Social Security numbers, they are attractive targets for cybercriminals. The physicians surveyed by the AMA rank these privacy risks ahead of any other reported AI concern, underscoring the need for rigorous data governance and security controls.

The year 2024 was a pivotal one for health care technology, with AI usage increasing alongside record-breaking data breaches. The largest health care breach in history, disclosed by Change Healthcare, Inc. that February, affected 190 million individuals. Another breach, which exposed the records of 483,000 patients across six hospitals, originated with an agentic AI workflow vendor whose platform left sensitive patient information exposed for weeks without authorization controls.

Applying HIPAA Requirements to AI Governance

Given these privacy and security challenges, regulatory frameworks like HIPAA play a crucial role in setting guardrails for AI usage. Providers adopting AI should understand how the law applies to the technologies improving their practices so they can avoid violations and limit risk to their patients and businesses.

HIPAA’s privacy rule governs the use and disclosure of PHI and requires varying levels of authorization. A covered entity:

  • Must share PHI if a patient requests their information or HHS is performing a compliance investigation. Audits by the Office for Civil Rights (OCR) may include extensive requests for information about AI systems, including inventories of tools, contractual agreements with vendors, and logs of AI system activity.
  • May share PHI when the information is disclosed for treatment, payment, or operations (known as the TPO exception to written authorization). Tools that use AI for such purposes, like CDSS and diagnostic imaging tools, must provide only TPO functions. If the vendor uses PHI for other purposes, both the use and the underlying disclosure may violate HIPAA.
  • May share PHI for any other use of health data, including marketing, advertising, or product development, only with written authorization. Training AI models on PHI therefore requires the patient's express consent before their data can be reused for these purposes. The sketch following this list illustrates the three tiers.
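To make these three tiers concrete, here is a minimal policy-as-code sketch in Python. The purpose categories, function names, and messages are illustrative assumptions, not a legal decision tool; real disclosure decisions require counsel's review.

```python
from enum import Enum

class Purpose(Enum):
    PATIENT_REQUEST = "patient access request"
    HHS_INVESTIGATION = "HHS/OCR compliance investigation"
    TREATMENT = "treatment"
    PAYMENT = "payment"
    OPERATIONS = "health care operations"
    MODEL_TRAINING = "AI model training"
    MARKETING = "marketing or advertising"

# Hypothetical mapping of the three authorization tiers described above.
MANDATORY = {Purpose.PATIENT_REQUEST, Purpose.HHS_INVESTIGATION}  # "must share"
TPO = {Purpose.TREATMENT, Purpose.PAYMENT, Purpose.OPERATIONS}    # "may share" without written authorization

def disclosure_decision(purpose: Purpose, has_written_authorization: bool) -> str:
    """Return the authorization tier for a proposed PHI disclosure (illustrative only)."""
    if purpose in MANDATORY:
        return "REQUIRED: covered entity must disclose"
    if purpose in TPO:
        return "PERMITTED: TPO exception, no written authorization needed"
    if has_written_authorization:
        return "PERMITTED: patient provided written authorization"
    return "BLOCKED: written authorization required before disclosure"

# Example: feeding PHI to an AI vendor for model training without consent.
print(disclosure_decision(Purpose.MODEL_TRAINING, has_written_authorization=False))
# -> BLOCKED: written authorization required before disclosure
```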

Besides authorization requirements, covered entities must maintain business associate agreements (BAAs) with vendors that receive, maintain, or transmit health information. These agreements must meet certain statutory requirements, several of which are critical in AI contracts: BAAs must describe the permitted and required uses of PHI and must prohibit business associates from using patient data for other purposes. The agreement must also require the vendor to maintain safeguards around the information and establish a timeline for notifying the covered entity of unlawful exposure or a data breach.

Security requirements for patient information are also outlined in HIPAA. The security rule requires covered entities and business associates to safeguard the confidentiality, integrity, and availability of patient information by:

  • Identifying and protecting against reasonably anticipated threats;
  • Preventing impermissible uses and disclosures of patient information; and
  • Ensuring workforce compliance with its provisions.

Business associates must abide by strict security protocols to handle PHI without violating the law. Data breaches like those described above threaten an efficient and safe health care environment: they expose information and can render entire IT systems unusable, delaying appointments and limiting provider availability. When critical treatment information is inaccessible, patient safety is at risk.
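To ground the integrity prong of the security rule, the snippet below sketches one common safeguard pattern: a keyed hash over a stored record so later tampering can be detected. It is a minimal illustration under assumed names and keys, not a prescribed HIPAA control.

```python
import hashlib
import hmac

def record_fingerprint(record_bytes: bytes, key: bytes) -> str:
    """Keyed hash (HMAC-SHA256) of a stored record; any alteration changes it."""
    return hmac.new(key, record_bytes, hashlib.sha256).hexdigest()

key = b"example-integrity-key"  # in practice, managed in a secrets vault
stored = record_fingerprint(b"patient chart v1", key)

# Later, recompute and compare in constant time to detect tampering.
unchanged = hmac.compare_digest(stored, record_fingerprint(b"patient chart v1", key))
print(unchanged)  # -> True; any edit to the record would flip this to False
```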

Strategies for Maintaining Compliance

With these regulatory challenges in mind, providers and businesses should adopt several strategies to maintain compliance with the law while taking advantage of the benefits of artificial intelligence.

Vendor Selection 

Vendor selection is the first, and often the strongest, line of defense against HIPAA violations: careful vetting of third-party vendors and software is one of the most effective ways to protect against security breaches and misuse of patient information. The most critical provision when contracting with AI providers is a prohibition on using patient data to train or retrain models without patient authorization. Covered entities should also require industry-standard cybersecurity practices, such as alignment with the NIST SP 800-66 Rev. 2 framework or similarly strong protocols. Finally, requiring a short breach notification window ensures the provider learns of an incident in time to limit lateral movement through its network and to minimize disruptions to medical care and to the confidentiality, integrity, and availability of patient information. This proactive approach to vendor management significantly reduces both legal liability and the risk of regulatory enforcement actions.
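One lightweight way to operationalize these contract terms, and to stay ready for the OCR inventory requests noted earlier, is to track each AI vendor in a structured record. The sketch below is illustrative: the fields, the hypothetical vendor, and the 72-hour threshold are assumptions rather than regulatory requirements.

```python
from dataclasses import dataclass, field

@dataclass
class AIVendorRecord:
    """Illustrative inventory entry for one AI tool; all fields are assumptions."""
    vendor: str
    tool: str
    baa_signed: bool                  # BAA executed before any PHI flows
    prohibits_training_on_phi: bool   # no model (re)training without authorization
    security_framework: str           # e.g., "NIST SP 800-66 Rev. 2"
    breach_notice_hours: int          # contractual notification window
    permitted_uses: list[str] = field(default_factory=list)

    def contract_gaps(self) -> list[str]:
        """Flag missing provisions before the tool is approved for PHI."""
        gaps = []
        if not self.baa_signed:
            gaps.append("no executed BAA")
        if not self.prohibits_training_on_phi:
            gaps.append("missing no-training clause")
        if self.breach_notice_hours > 72:  # threshold is a policy choice, not a HIPAA rule
            gaps.append("breach notification window too long")
        return gaps

scribe = AIVendorRecord(
    vendor="ExampleScribe Inc.",  # hypothetical vendor
    tool="ambient clinical note summarizer",
    baa_signed=True,
    prohibits_training_on_phi=False,
    security_framework="NIST SP 800-66 Rev. 2",
    breach_notice_hours=24,
    permitted_uses=["treatment documentation"],
)
print(scribe.contract_gaps())  # -> ['missing no-training clause']
```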

Employee Training

Second only to vendor management is employee compliance. Employees must be trained on how artificial intelligence presents new threats to patient privacy and security. “Shadow IT,” where employees use or download software without their organization’s approval, poses a particular threat with AI: if an employee inputs PHI into a non-HIPAA-compliant tool, or into one without a BAA in place, the information cannot be recovered and may be integrated into the model or exposed in a cyber incident. Providing approved, HIPAA-compliant tools reduces the temptation to use unsanctioned software, and a guard like the one sketched below can catch obvious identifiers before they leave the network. Employees should also be required to use multi-factor authentication on all accounts and interfaces for approved software; when breaches occur, layered security controls limit the information exposed.
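As a technical complement to training, an organization might screen outbound text for obvious identifiers before it reaches an external tool. The sketch below shows the idea with two regular expressions; the MRN format is hypothetical, and real PHI detection requires far more than pattern matching, so treat this as illustrative only.

```python
import re

# Illustrative patterns only; real PHI spans far more than these identifiers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
MRN_PATTERN = re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE)  # hypothetical MRN format

def screen_outbound_text(text: str) -> bool:
    """Return True if the text appears free of the screened identifiers."""
    return not (SSN_PATTERN.search(text) or MRN_PATTERN.search(text))

prompt = "Summarize discharge plan for patient, SSN 123-45-6789."
if screen_outbound_text(prompt):
    print("OK to send to the approved tool.")
else:
    print("Blocked: possible PHI detected; route through a HIPAA-compliant workflow.")
```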

As AI usage continues to accelerate, health care organizations must balance the transformative potential of these technologies with the obligation to protect patient privacy and maintain regulatory compliance.

The path forward for safe use of AI in health care begins with rigorous vendor selection and extends through comprehensive employee training and ongoing governance. Organizations that invest in proper AI governance frameworks now will be positioned to benefit from emerging technologies while avoiding the substantial legal, financial, and reputational risks associated with HIPAA violations. Success in this evolving landscape demands treating AI compliance as the foundation for sustainable and secure advancement in patient care.