
Anthropic Introduces Claude AI for Healthcare, Enabling Secure Access to Health Records

Anthropic has introduced a new set of healthcare-focused capabilities for its Claude AI platform, making it the latest major AI provider to expand into personal health data analysis.

The initiative, branded Claude for Healthcare, enables U.S. subscribers on the Claude Pro and Max plans to securely connect their medical records and laboratory results to the platform. Users can authorize access through integrations with HealthEx and Function, while support for Apple Health and Android Health Connect is scheduled to launch later this week through the company's mobile applications.

Once connected, Claude can generate summaries of a user’s medical history, translate clinical test results into plain language, identify trends across health and fitness metrics, and assist in preparing questions for medical appointments. Anthropic stated that the goal is to improve patient engagement during clinical interactions and help individuals better understand and manage their health information.

The announcement closely follows OpenAI’s recent launch of ChatGPT Health, which provides a dedicated environment for securely linking medical records and wellness applications to deliver personalized insights, laboratory interpretations, and nutrition-related guidance.

Anthropic emphasized that the new integrations are privacy-focused by design. Users maintain full control over the data shared with Claude and can modify or revoke access at any time. The company also confirmed that health data connected through the platform is not used for model training.

The expansion comes amid increasing scrutiny over the reliability of AI-generated health information. Recent actions by Google to remove inaccurate AI health summaries have highlighted concerns about potential harm. Both Anthropic and OpenAI have reiterated that their systems are fallible and should not be treated as replacements for professional medical advice.

According to Anthropic’s Acceptable Use Policy, outputs related to healthcare decisions, medical diagnoses, patient care, mental health, or treatment must be reviewed by a qualified professional before use in high-risk scenarios. Anthropic added that Claude is designed to include appropriate disclaimers, acknowledge uncertainty, and guide users toward consulting licensed healthcare providers for personalized medical guidance.

