
Navigating the Promise and Peril of AI in Health Care

Posted September 30, 2025

From clinical decision-making to how insurers manage care and patients seek medical advice, AI is transforming every corner of the health care system. The challenge is to ensure these tools enhance — not endanger — patient safety, trust, and health outcomes.

Already, clinicians are using AI to interpret imaging, detect early signs of disease, and streamline administrative work. For example, AI-driven radiology tools can flag abnormalities faster than a human eye, allowing earlier intervention. Predictive algorithms help identify patients at high risk of readmission or complications. And natural language processing tools are turning doctor-patient conversations into structured records, saving clinicians time and potentially improving accuracy.

But AI is only as good as the data it learns from. If training data reflects historic bias, algorithms can perpetuate disparities. A risk model that underestimates illness severity in certain populations or makes faulty assumptions can worsen inequities — not solve them.

Meanwhile, health insurers are deploying AI to predict utilization, detect fraud, and manage prior authorization. These applications can improve efficiency — but they also raise red flags. When an algorithm denies or delays care, who is accountable? Patients often have no insight into how these systems make decisions, and even now relatively few appeal denials. Transparency in how AI models are built, tested, and monitored must become standard practice.

Regulators are starting to take notice. The Centers for Medicare & Medicaid Services (CMS) has issued guidance clarifying that AI cannot replace medical judgment in coverage decisions. That’s a step forward — but more oversight will be needed as insurers increasingly rely on automated systems.

And patients today are using AI tools — like symptom checkers and chatbots — to seek health information or mental health support. These tools can empower people to ask better questions, understand lab results, and manage chronic conditions. Yet they can also spread misinformation or miss serious diagnoses. Patients need clear disclosure that these tools are not clinicians, and that personal health data entered into them may not be protected by HIPAA.

As AI’s footprint in health care grows, so must our guardrails. We need:

  • Transparency: Patients and clinicians should know when AI is being used and how it informs decisions.
  • Accountability: Regulators must ensure that when AI tools make recommendations or automate actions, responsibility for patient outcomes remains with licensed professionals and institutions.
  • Bias Mitigation: Developers and health organizations must test algorithms across diverse populations and publicly report performance.
  • Data Protection: Patient information used to train AI should be de-identified, secured, and never sold without informed consent.
  • Ethical Oversight: AI in health care should align with our core principles: advancing safety, quality, and patient-centered care.

AI holds immense potential to improve care quality, reduce administrative burden, and empower patients. But without strong safeguards, it risks eroding trust and widening disparities. As leaders in New Jersey’s health care community, we have a responsibility to guide how these tools are adopted — thoughtfully, transparently, and always with the patient’s best interest at heart. I welcome your thoughts on how to ensure AI is used responsibly in health care.

Categories: Schwimmer Script Blog