Anthropic announced a new suite of healthcare and life sciences features on Sunday, including health record integrations that allow users of its Claude artificial intelligence platform to share their health records and better understand their medical information. The launch comes just days after rival OpenAI introduced ChatGPT Health, signaling a broader push by major AI companies into healthcare, a field seen as both an opportunity and a sensitive testing ground for generative AI technology.

Both tools allow users to share information from health records and fitness apps, including Apple’s Health app, to personalize health-related conversations. The expansion comes amid increased scrutiny over whether AI systems can safely interpret medical information and avoid offering harmful guidance.

Claude’s new health records features are now available in beta for Pro and Max users in the U.S., while integrations with Apple Health and Android Health Connect are rolling out in beta for Pro and Max subscribers in the U.S. For now, users must join a waitlist to access OpenAI’s ChatGPT Health tool.

Eric Kauderer-Abrams, head of life sciences at Anthropic, one of the world’s largest AI companies and reportedly valued at around $350 billion, said Sunday’s announcement represents a step toward using AI to help people manage complicated healthcare issues.

“When navigating through health systems and health situations, you often have this feeling that you are alone and that you are trying to tie it all together: all this data from all these sources, information about your health and your medical records, and you’re on the phone all the time,” he told NBC News. “I am really excited about getting to the world where Claude can take care of all that.”

“With the new Claude for healthcare functions, you can integrate all your personal information with your medical and insurance records, and have Claude as the one orchestrator, able to navigate the whole thing and simplify it for you,” Kauderer-Abrams said.

When unveiling ChatGPT Health last week, OpenAI said hundreds of millions of people ask wellness or health-related questions on ChatGPT every week. The company stressed that ChatGPT Health is not intended for diagnosis or treatment but is instead meant to help users navigate everyday questions and understand developments over time, not just moments of illness.  

AI tools like ChatGPT and Claude can help users understand complex medical reports and double-check doctors’ decisions. And for the billions of people who lack access to vital medical care, they can summarize and integrate information that would otherwise be inaccessible.

Like OpenAI, Anthropic emphasized patient data privacy in the blog post accompanying Sunday’s launch. The company said the features are opt-in, and that health details shared with Claude are excluded from the model’s memory and not used to train future systems. In addition, users can disconnect or edit permissions at any time, Anthropic said.

Anthropic also announced new tools for healthcare providers and expanded its Claude for Life Sciences offerings to improve research. Anthropic said its platform now includes HIPAA-compliant, AI-ready infrastructure, referring to the federal law governing medical privacy, and can connect to federal healthcare coverage databases, the official registry of medical providers, and other services intended to ease physician and health provider workloads.

These new features could automate tasks such as preparing prior authorization requests for specialist care and supporting insurance appeals by matching clinical guidelines to patient records.  

Dhruv Parthasarathy, chief technology officer at Commure, which creates AI solutions for medical documentation, said in a statement that Claude’s features will help Commure save clinicians millions of hours annually and return their focus to patient care.

The rollout comes after months of increased scrutiny of AI chatbots’ role in dispensing mental health and medical advice. Character AI and Google agreed to settle a lawsuit alleging their AI tools contributed to worsening mental health among teenagers who died by suicide.  

Anthropic, OpenAI, and other leading AI companies caution that their chatbots can make mistakes and should not be substitutes for professional judgment.

Anthropic’s acceptable use policy requires that a qualified professional review the content or decision before deployment or finalization when Claude is used for healthcare decisions, medical diagnosis, patient care, therapy, mental health, or other medical guidance.

Anthropic’s Kauderer-Abrams said the data access controls are robust and that, for many people, the features can save 90% of the time spent managing this kind of information. For critical use cases where every single detail is essential, however, he said users should still absolutely check the information. “We are not claiming that you can completely remove the human from the loop,” he said. “We see it as a tool to amplify what the human experts can do.”
