New evidence suggests that distrust of AI in medicine is rising even as hospitals, clinics, and doctors adopt the technology at a rapid pace. A STAT analysis published on March 23, 2026, argues that AI is landing in a health system where trust was already weak, and that the speed of deployment may be making that problem worse.
That concern is showing up in survey data. A February 2025 study cited by STAT found that 66% of Americans reported low trust in their health care system to use AI responsibly. The same study found that 58% doubted their health system would make sure an AI tool did not harm them.
At the same time, AI use among physicians is climbing fast. The American Medical Association said in March 2026 that 81% of doctors now use AI professionally, more than double the share reported in 2023. That contrast is striking: clinicians are moving forward quickly, while many patients remain uneasy.
Patients Want Oversight, Not Blind Automation
Recent research suggests that patients do not outright reject medical AI. Instead, they respond strongly to the safeguards around it. A JAMA Network Open study found that trust and acceptance improved when people were told that clinicians remained involved, oversight mechanisms existed, and the system’s performance and data quality were explained clearly.
That point matters because health care is different from many other AI settings. Patients are not just judging convenience. They are judging whether a system is safe, fair, and accountable in situations that may affect diagnosis, treatment, or survival. STAT argued that AI can deepen distrust when institutions introduce it as a cost-saving or efficiency tool before patients feel confident it is being used in their interest.
The wider public mood also remains cautious. Pew Research Center reported in March 2026 that Americans remain wary of AI overall, even as exposure to it grows across everyday life. That broader caution likely shapes how people react when AI enters the exam room or the patient portal.
Error Risks and Privacy Fears Add to the Strain
Part of the trust problem is technical. A health-related AI product that provides incomplete or incorrect guidance can do more than just annoy users. It can change decisions about when to seek care. STAT pointed to a February study that found ChatGPT Health had a 50% error rate in emergency test cases, including instances in which it incorrectly advised delaying care.
Privacy concerns are also hard to ignore. On the same day as STAT’s trust piece, the outlet reported on a court filing regarding patient record access and the alleged misuse of provider identities in record-sharing systems. Although that case is separate from AI itself, it reinforces a broader fear that digital health tools may expose sensitive information in ways patients cannot control.
Together, these concerns help explain why distrust of AI in medicine is becoming a health policy issue, not just a tech story. Patients may accept AI more readily when doctors stay involved, systems are transparent, and institutions prove that safety comes before efficiency. Without that, faster AI adoption may continue to outpace public trust.