From clinical decision-making to how insurers manage care and patients seek medical advice, AI is transforming every corner of the health care system. The challenge is to ensure these tools enhance — not endanger — patient safety, trust, and health outcomes.
Already, clinicians are using AI to interpret imaging, detect early signs of disease, and streamline administrative work. For example, AI-driven radiology tools can flag abnormalities faster than the human eye, allowing earlier intervention. Predictive algorithms help identify patients at high risk for readmission or complications. And natural language processing tools are turning doctor-patient conversations into structured records, saving clinicians time and potentially improving accuracy.
But AI is only as good as the data it learns from. If training data reflects historic bias, algorithms can perpetuate disparities. A risk model that underestimates illness severity in certain populations or makes faulty assumptions can worsen inequities — not solve them.
Meanwhile, health insurers are deploying AI to predict utilization, detect fraud, and manage prior authorization. These applications can improve efficiency, but they also raise red flags. When an algorithm denies or delays care, who is accountable? Patients often have no insight into how these systems reach their decisions, and relatively few appeal denials in the first place. Transparency in how AI models are built, tested, and monitored must become standard practice.
Regulators are starting to take notice. The Centers for Medicare & Medicaid Services (CMS) has issued guidance clarifying that AI cannot replace medical judgment in coverage decisions. That’s a step forward — but more oversight will be needed as insurers increasingly rely on automated systems.
And patients today are using AI tools, such as symptom checkers and chatbots, to seek health information or mental health support. These tools can empower people to ask better questions, understand lab results, and manage chronic conditions. Yet they can also spread misinformation or miss serious diagnoses. Patients need clear disclosure that these tools are not clinicians and that personal health data entered into them may not be protected by HIPAA.
As AI’s footprint in health care grows, so must our guardrails. We need:
- Transparency: Patients and clinicians should know when AI is being used and how it informs decisions.
- Accountability: Regulators must ensure that when AI tools make recommendations or automate actions, responsibility for patient outcomes remains with licensed professionals and institutions.
- Bias Mitigation: Developers and health organizations must test algorithms across diverse populations and publicly report performance.
- Data Protection: Patient information used to train AI should be de-identified, secured, and never sold without informed consent.
- Ethical Oversight: AI in health care should align with our core principles of advancing safety, quality, and patient-centered care.
AI holds immense potential to improve care quality, reduce administrative burden, and empower patients. But without strong safeguards, it risks eroding trust and widening disparities. As leaders in New Jersey’s health care community, we have a responsibility to guide how these tools are adopted — thoughtfully, transparently, and always with the patient’s best interest at heart. I welcome your thoughts on how to ensure AI is used responsibly in health care.