Anthropic has struck a new partnership with HealthEx to link electronic health records to Claude, its artificial intelligence assistant. The move could let patients ask questions about their care with the help of their own medical data. The companies did not share a launch date or terms, but the plan signals a push to bring AI into everyday patient decisions.
The companies say the goal is simple: make health information easier to use. Patients could ask Claude follow-up questions after visits, review lab results, or prepare for appointments. The effort comes as health systems look for ways to reduce confusion and improve access.
Background and Context
Patient portals have expanded over the past decade, but many people still find them hard to navigate. Studies have found low portal use among some groups, including older adults and those without regular internet access. At the same time, interest in AI assistants has surged in healthcare settings, from triage chatbots to tools that draft clinical notes.
Anthropic’s Claude is designed for safe, helpful responses. HealthEx, a healthcare platform, would act as the bridge to electronic health records, or EHRs. The pairing aims to reduce friction by bringing data and questions into one place.
“Anthropic is partnering with HealthEx to let patients use their electronic health records when asking Claude for medical or health advice.”
How It Could Work
Under the partnership, patients would grant permission for Claude to access parts of their record. That could include medications, allergies, recent lab results, care plans, or visit notes. Claude would then tailor replies to the individual’s information and the question asked. Possible uses include:
- Explain test results in plain language.
- Summarize care instructions after a visit.
- Flag potential conflicts with listed medications.
- Prepare questions for a patient’s next appointment.
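The permission step described above can be pictured as a simple filter: only the record sections a patient has opted to share ever reach the assistant. The sketch below is purely illustrative; the field names, scope labels, and record structure are assumptions, not a real HealthEx or Anthropic interface.

```python
# Hypothetical sketch: filter a patient record down to consented
# categories before any of it is used to answer a question.
# All names and values here are illustrative assumptions.

CONSENTED_SCOPES = {"medications", "allergies", "lab_results"}

def build_context(record: dict, scopes: set[str]) -> dict:
    """Return only the record sections the patient has opted to share."""
    return {key: value for key, value in record.items() if key in scopes}

record = {
    "medications": ["lisinopril 10 mg daily"],
    "allergies": ["penicillin"],
    "lab_results": [{"test": "A1c", "value": 6.1, "unit": "%"}],
    "visit_notes": ["..."],  # not in the consented scopes, so excluded
}

context = build_context(record, CONSENTED_SCOPES)
# context contains medications, allergies, and lab_results only
```

The point of the pattern is that consent is enforced before the data leaves the record store, rather than trusting the assistant to ignore fields it should not see.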
Clear guardrails will be key. The assistant will need to avoid making diagnoses or replacing clinician judgment. It should steer users to urgent care when symptoms suggest an emergency.
Privacy and Safety Questions
Any tool that touches protected health information must meet strict rules. In the United States, HIPAA governs how data is handled, stored, and shared. The companies will need business associate agreements, audit controls, and clear logs of access.
Security experts also warn about data retention risks, where information used to answer a question is cached or reused beyond its original purpose. Strong data handling policies are necessary to prevent leaks. Patients will need a simple way to revoke access and see what data was used.
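The revocation and audit requirements described above follow a common pattern: every read of patient data is logged, and revoking consent blocks future reads while leaving the log intact. The sketch below is an assumption-laden illustration of that pattern, not any company's actual implementation.

```python
# Hypothetical sketch of consent revocation plus an access log.
# Class and field names are illustrative assumptions.
from datetime import datetime, timezone

class ConsentLedger:
    def __init__(self):
        self.granted = set()      # scopes the patient currently allows
        self.access_log = []      # every read attempt, allowed or not

    def grant(self, scope: str):
        self.granted.add(scope)

    def revoke(self, scope: str):
        self.granted.discard(scope)

    def read(self, scope: str, purpose: str):
        allowed = scope in self.granted
        # Log the attempt before enforcing, so denied reads are visible too.
        self.access_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "scope": scope,
            "purpose": purpose,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"consent for {scope!r} is not active")
        return f"<{scope} data>"

ledger = ConsentLedger()
ledger.grant("lab_results")
ledger.read("lab_results", "explain A1c result")  # allowed and logged
ledger.revoke("lab_results")
# Any later read of lab_results now fails, and the denied
# attempt still appears in the access log for the patient to review.
```

Logging denied attempts as well as successful ones is what lets a patient see exactly what data was used, and when, after the fact.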
Safety is another concern. Even careful AI systems can make errors or give incomplete answers. Transparency about limitations, along with citations to the underlying record, can help. Clear on-screen guidance should tell users that the tool is not a clinician.
Benefits and Risks for Patients and Clinicians
If it works as intended, this approach could reduce confusion after visits. Many patients leave appointments with questions about dosing, side effects, or follow-up steps. An assistant that knows their record could help them act sooner and avoid mistakes.
Clinicians could benefit if the tool cuts routine messages. Short, accurate explanations of test results may prevent long email threads and missed calls. However, if answers are unclear or raise new worries, message volume could rise.
Bias is a known issue in AI systems. Responses must not vary unfairly by age, race, gender, or language. Regular testing and public reporting would help build trust. Human review channels will matter when the stakes are high.
What to Watch Next
Key details will decide adoption. Patients will ask how consent works and what data is shared. Hospitals will ask about safety records, cost, and integration with existing EHR vendors. Regulators may look for evidence that the tool improves outcomes without new risks.
Clear metrics can guide the rollout: fewer medication errors, faster follow-up on abnormal results, shorter wait times for answers, and higher patient satisfaction. External audits could verify safety and privacy claims.
The promise is direct: help people understand their own care using the data they already have. The challenge is to do it safely, privately, and without adding burden to clinicians. If Anthropic and HealthEx can show progress on those points, patient-facing AI may move from pilot to routine use.