HIPAA and ChatGPT: What Healthcare Teams Must Know in 2026

May 8, 2026 · 10 min read · Sekurely Research

Healthcare organizations are under enormous pressure to adopt AI — but most LLM deployments in clinical settings violate HIPAA before the first patient interaction is logged.

The Core Problem

HIPAA's Privacy Rule (45 CFR §164.502) and Security Rule (45 CFR §164.312) apply to any system that creates, receives, maintains, or transmits Protected Health Information (PHI). When a clinician pastes patient data into ChatGPT, that data is transmitted to OpenAI's servers, which means OpenAI is handling PHI on your organization's behalf: the defining role of a Business Associate under HIPAA.

Without a signed Business Associate Agreement (BAA) between your organization and OpenAI, this is a HIPAA violation. Full stop.

What Counts as PHI in AI Prompts?

PHI covers the 18 identifier categories enumerated in HIPAA's Safe Harbor de-identification standard. In AI workflows, the most commonly exposed are:

  • Patient names
  • Dates (admission, discharge, date of birth)
  • Phone numbers and email addresses
  • Social Security Numbers
  • Medical record numbers
  • Account numbers
  • Geographic data (zip codes, addresses)
  • Device identifiers
  • Biometric identifiers

Any of these appearing in an LLM prompt, even incidentally, triggers HIPAA obligations.
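
As a rough illustration of what automated detection has to look for, the sketch below flags a few of these identifiers with regular expressions. It is a minimal example, not a production detector: the patterns, the scan_for_phi helper, and the sample prompt are illustrative assumptions, and regexes alone will miss free-text identifiers such as patient names.

```python
import re

# Illustrative patterns for a few of the 18 identifiers; these are
# deliberately simple and will both over- and under-match in real clinical text.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE),
}

def scan_for_phi(text: str) -> list[tuple[str, str]]:
    """Return (identifier_type, matched_text) pairs found in a prompt."""
    hits = []
    for label, pattern in PHI_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((label, match.group()))
    return hits

if __name__ == "__main__":
    prompt = "Summarize the visit for John Doe, DOB 04/12/1961, MRN 00482913."
    print(scan_for_phi(prompt))
    # -> [('date', '04/12/1961'), ('mrn', 'MRN 00482913')]
    # Note that the name "John Doe" is not caught; names need NER, not regex.
```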

What Healthcare Teams Must Do

Step 1: Get a BAA. OpenAI offers a BAA for ChatGPT Enterprise and API customers. This is the minimum requirement. Without it, no clinical use is permissible.

Step 2: Implement PII scanning. Before any text reaches an LLM, scan it for PHI using automated detection. Strip or mask identifiers before they enter the prompt.
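
A minimal sketch of that pre-prompt step, assuming a simple regex-based redaction pass sits between the clinician's input and the model call. The mask_phi helper, placeholder tokens, and patterns are illustrative assumptions; real deployments typically add clinical NER to catch names, addresses, and other free-text identifiers.

```python
import re

# Illustrative redaction rules; production systems combine pattern
# matching with clinical NER rather than relying on regexes alone.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),
    (re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE), "[MRN]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def mask_phi(text: str) -> str:
    """Replace PHI-like spans with placeholder tokens before prompting."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# The masked text, not the raw clinician input, is what gets sent to the model.
raw = "Pt Jane Roe, MRN 00573211, seen 03/02/2026, reports chest pain."
print(mask_phi(raw))
# -> "Pt Jane Roe, [MRN], seen [DATE], reports chest pain."
```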

Step 3: Audit AI outputs. LLM responses may inadvertently surface PHI from training data or retrieved context. Scan outputs before they reach clinical staff or patients.
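
A hedged sketch of an output gate along those lines: the response is scanned for PHI-like spans and withheld, with the event logged, if anything matches. The release_output helper, the pattern set, and the logging choices are assumptions rather than a prescribed design; note that the audit entry records only which pattern fired, never the matched PHI itself.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_output_audit")

# Same style of illustrative patterns as the pre-prompt scan.
PHI_LIKE = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:# ]?\d{6,10}\b", re.IGNORECASE),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def release_output(response: str) -> str | None:
    """Return the response if it looks clean; otherwise withhold it and log the event."""
    hits = [name for name, pattern in PHI_LIKE.items() if pattern.search(response)]
    if hits:
        # Log only which identifier types fired, never the matched text.
        audit_log.warning("Withheld model output: matched %s", hits)
        return None
    return response

print(release_output("The medication list looks consistent with the note."))
print(release_output("Contact the patient at jdoe@example.com to reschedule."))
# The first call returns the text; the second is withheld and logged.
```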

Step 4: Document your AI use. HIPAA requires organizations to maintain policies and procedures for all systems that handle PHI. Your AI governance documentation must include LLM usage policies.

Step 5: Train staff. Clinical staff need to understand that pasting patient information into unauthorized AI tools is a reportable breach, not a productivity shortcut.

Conclusion

HIPAA compliance in AI is not optional, and it is not complicated: it requires a BAA, PHI scanning, output auditing, and staff training. Organizations that skip these steps face breach notifications, OCR investigations, and significant fines.

Protect Your AI Systems Today

Scan for PII, detect prompt injection, and enforce compliance. Free to try, no signup needed.