Your doctor is probably using AI, even if they haven’t told you about it.
Over the past two years, medical providers across America have quietly embraced a new AI tool called OpenEvidence to help them make clinical decisions, brush up on medical knowledge and even prepare for their licensing exams. The service, a sort of chatbot for doctors, was used by about 65% of U.S. doctors across almost 27 million clinical encounters in April alone, the company told NBC News.
“Everyone is using it,” said Dr. Anupam Jena, an internal medicine physician at Massachusetts General Hospital in Boston and a professor of healthcare policy at Harvard. “Its growth really has been exponential.”
NBC News spoke with over two dozen doctors, hospital administrators, medical students and healthcare researchers from Hawaii to Maine to explore the rise of OpenEvidence. Each individual said they either used it regularly themselves or knew someone who did.
Almost two-thirds of physicians — or roughly 650,000 doctors — in the U.S. actively use OpenEvidence, while another 1.2 million use it internationally, OpenEvidence representatives said. With its quick and tailored replies, OpenEvidence has become an AI-era equivalent of consulting a colleague for their expert opinion, though the software can also write patient discharge notes and provide custom study tools for doctors’ medical exams.
“Sixty percent of all the searches are about how to make clinical decisions,” said Jena, who is currently examining 90 million OpenEvidence queries submitted since 2024 as part of a new research project. “The physicians are asking: For this particular patient, or with this profile, this condition, maybe other comorbidities that they have, what’s the right treatment?”
Yet with OpenEvidence’s skyrocketing popularity, some experts worry about potential hallucinations or incomplete answers, a lack of rigorous scientific studies on the tool’s patient impact, and the potential for doctors’ critical thinking and evaluation skills to erode with increased OpenEvidence use and dependence.
But many in the medical world see OpenEvidence as a time-saving tool that can improve patient care.
“The vast majority of our physicians are familiar with it, depending on their area of specialty,” said Dr. Jeremy Cauwels, a hospitalist and the chief medical officer for the Sanford Health system based in Sioux Falls, South Dakota.
“OpenEvidence is one of those tools that’s remarkably easy to adopt,” said Cauwels, who oversees more than 2,500 healthcare providers in the country’s largest rural healthcare system. “It’s freely available, it’s very functional on your phone, and it’s one of those things that can help you answer questions more quickly than you would be able to by any other method.”
Doctors across specialties, states and clinic sizes echoed the sentiment.
For example, a junior doctor at a New Hampshire hospital said that when he saw a patient’s potassium value plummet, he checked OpenEvidence to make sure it was a normal side effect of a medication and not a new emergency. After searching through peer-reviewed medical publications, OpenEvidence said it was a common side effect and provided several options to restore normal potassium levels.
Meanwhile, across the country, a doctor at the local Indian Health Service medical center in rural Pine Ridge, South Dakota, was not convinced a patient’s spine was fractured after looking at specks on an X-ray. He vaguely recalled from his medical school training that a different type of scan might be needed to make a firm diagnosis, so he asked OpenEvidence whether an X-ray would suffice. OpenEvidence said that a CT scan was preferred to confirm that type of fracture and provided several links to papers with more information. Both doctors requested anonymity because their employers had not authorized them to speak to the press.
OpenEvidence is clear that its services should be used to supplement — not replace — doctors’ judgment. “While we hope you find the Services useful to you as a healthcare professional,” its terms of service say, “they are in no way intended to serve as a diagnostic service or platform, to provide certainty with respect to a diagnosis, to recommend a particular product or therapy or to otherwise substitute for the clinical judgment of a qualified healthcare professional.”
OpenEvidence also says it complies with HIPAA, the federal health privacy law, through a series of privacy protocols and protections. In April of last year, the company said that “U.S. covered entities can securely input protected health information (PHI) in accordance with HIPAA’s privacy and security standards.” However, some health systems are not satisfied with the system’s overall privacy safeguards. For example, MaineHealth currently asks its doctors to refrain from entering PHI into OpenEvidence.
Some of the doctors reached by NBC News said that they and their colleagues used the platform on their personal devices, including information such as a patient’s age, sex and previous medical history in their queries while refraining from entering names or other personal identifiers.
At its core, OpenEvidence is an AI-powered medical search engine that combs through vast databases of healthcare research to provide suggestions about clinical decisions or medication options and help highlight the latest evidence from a variety of medical fields.
OpenEvidence’s landing page calls it “America’s Official Medical Knowledge Platform” and presents users with a search bar, which suggests entering queries like “alternatives if metformin causes diarrhea,” or “what are the latest advancements in gene therapy for Duchenne muscular dystrophy?”
To answer user questions, the system generates a summary of the relevant medical research, providing links to the peer-reviewed articles or medical guidelines that informed the answer.
Healthcare providers can access the tool through its website or via a stand-alone mobile app. Practitioners must sign up for an account and provide their unique healthcare ID number supplied by the U.S. government. Once registered, the providers can ask unlimited medical questions — for free.
“Our commitment is that core OpenEvidence will always be free for users,” CEO Daniel Nadler said in an interview. The service is currently funded by ads — some from pharmaceutical and medical device companies — on its app and website, though many of the clinicians interviewed for this article remarked that the ads were either unobtrusive or nonexistent.
OpenEvidence is part of a growing cottage industry of AI-powered medical tools, from AI scribes that record and transcribe doctors’ speech during patient appointments to OpenEvidence competitors like Doximity or iatroX that aim to consolidate and share clinical knowledge. A recent survey by the American Medical Association found that over 80% of physician respondents currently use some form of AI. Nadler and the OpenEvidence team aim to expand the service’s AI notetaking, billing and visit integration functions in the coming years.
Yet in an industry where technological change is often forced on hesitant doctors by medical administrators, few services have seen such rapid adoption as OpenEvidence.
“We did the hardest thing in the history of American health care,” Nadler said. “We got the majority of American doctors to all voluntarily adopt a single technology platform.”
America’s financiers are impressed. The startup raised $700 million in less than a year and is backed by the glitziest names in the venture capital business: Sequoia Capital, Google Ventures, Nvidia, Andreessen Horowitz, Thrive Capital and more. Valued at $1 billion in early 2025, OpenEvidence has surged to a valuation of $12 billion in just over a year.
Dr. Paul Sax, an infectious disease specialist at Boston’s Brigham and Women’s Hospital, said OpenEvidence often “borders on miraculous,” with its AI-supported search providing customized answers that other tools previously considered to be gold-standard cannot currently match.
For years, Sax said, clinicians have turned to a medical reference website called UpToDate to get recommendations for proper treatment and clinical decision-making. However, UpToDate consists of long-form, peer-reviewed summaries of the latest research, which can be cumbersome for doctors to search when they have targeted questions about specific scenarios.
“UpToDate has for so long been the dominant place where clinicians wanted to look things up at point of care,” Sax told NBC News. “It became the 10,000-pound gorilla that you would always look at things on UpToDate.”
“But OpenEvidence’s search feature is far more flexible,” said Sax, who was approached by OpenEvidence to serve as an outside infectious disease expert after he wrote positively about the company. “The process of searching for answers is frictionless.”
“That is the power of large language models, since you don’t need to search for specific terms. You search with the actual question,” Sax added, saying it’s helped him save time previously spent leafing through diffuse resources.
UpToDate is now rushing to implement its own AI tool, called Expert AI. Company spokesperson Suzanne Moran said “about 2,000 hospitals and health systems have signed up for Expert AI” as of April 30. “We believe healthcare AI must prioritize patient safety, transparency, and freedom from advertising,” Moran added.
One kidney specialist, who requested anonymity to discuss their use of a tool that had not been explicitly approved by their hospital system, said that OpenEvidence regularly saved them 30 minutes of fruitless searching through older systems — including UpToDate. They, like many other doctors, said OpenEvidence was particularly helpful in obtaining clinical information or treatment suggestions regarding symptoms or conditions outside their everyday expertise.
Jena, the doctor and internal medicine professor at Harvard, said this pattern was also reflected in the data.
“If you’re a surgeon, you know how to do all the surgery stuff. But if you see someone and their blood pressure or their heart rate is a little bit high, you might not be sure whether you can stop a medication that is designed to keep their blood pressure or heart rate lower,” Jena said. “We see that physicians are using OpenEvidence to try to answer some of those questions that they have to deal with in their clinical practice that aren’t specific to the areas they were trained in.”
He recounted his own recent frustration when searching — with traditional tools — for how to tailor antibiotics for a patient whose spleen had been removed years ago.
“I kept searching and just trying to figure out a variety of different ways to search for the answer to this question, and it didn’t come up,” Jena said. “But it came up in OpenEvidence. It referred me to this New England Journal of Medicine [NEJM] paper from 2014, which, for the life of me, I could not find on Google.”
This rapid reference to top-tier medical research is OpenEvidence’s special sauce, according to both doctors and the OpenEvidence team. The company has inked partnerships and licensing agreements with the world’s most prestigious medical journals, like NEJM and the Journal of the American Medical Association.
In April, leading AI company OpenAI launched its own version of OpenEvidence called ChatGPT for Clinicians, but the service does not currently license the same top-tier medical information accessible to OpenEvidence users. ChatGPT for Clinicians says it references “trusted medical sources, including millions of peer-reviewed studies.”
Dr. Eric Rubin, an infectious disease clinician and the editor-in-chief of NEJM, said the relationship between his journal and OpenEvidence was deeply symbiotic and a win for patients.
“I’m in the business of getting information out to people,” Rubin said. “We’re trying to publish the most important information there is, that’s really relevant for patient care. And in order for it to be delivered, we have to get to the doctors wherever they are.”
“So if they’re going to be using a tool like OpenEvidence, then I’d like my material to be on that platform,” Rubin said, noting that NEJM also has similar licensing agreements with other tools.
OpenEvidence has also struck agreements with specialized medical organizations like the National Comprehensive Cancer Network and the American Diabetes Association to provide the latest and most relevant treatment guidelines.
By ensuring that high-quality data fuels OpenEvidence’s operations, the team says it avoids the hallucinations or incorrect answers that plague many other AI systems. “We think of AI as search glue,” OpenEvidence’s Nadler said. “We have access to all of [our partners’] full text, to all of their figures. We don’t need the AI to generate answers.”
Some doctors reached by NBC News said they read each reference supplied by OpenEvidence to ensure the system correctly summarized the underlying research, though most said they only do so when they get an unexpected result.
“I usually click through to the referenced papers,” said Dr. Kassel Galaty, an emergency medicine physician in Portland, Oregon. “It depends on whether what it says is surprising to me. If it says something I would have done anyway, then I might not.”
“But if it says something that I hadn’t considered or sounds weird, then I’ll click through to whatever article it references and try to figure out what the paper says,” Galaty added.
Most of the doctors reached by NBC News said they were surprised by what they believe to be OpenEvidence’s high level of accuracy, especially compared to off-the-shelf AI tools that trawl the entire internet for answers.
However, some healthcare providers were quick to point out that OpenEvidence occasionally flubbed or exaggerated its answers, particularly on rare conditions or in “edge” cases.
Several doctors noted that OpenEvidence sometimes drew overly strong conclusions from medical studies with small sample sizes, though other clinicians noted that even the system’s mistakes tended to err on the side of caution.
For example, emergency physician Dr. John Rozehnal, based in New York City, said that OpenEvidence mistakenly suggested injecting a particular medicine might damage a patient’s liver, even though the risk of liver damage was very low and much more likely to be caused by the patient’s heavy drinking. Rozehnal told NBC News several weeks later that OpenEvidence had improved its answer and now accurately reflected the role of the patient’s drinking.
“In my opinion, OpenEvidence has really improved in the way it has delivered results and searched the medical literature over the past couple years, and we see a lot of providers and trainees using it,” said Dr. Hannah Galvin, a pediatrician and the chief health information officer at the Cambridge Health Alliance in Massachusetts. However, “we have concerns over how it is being used to deliver clinical care.”
Few studies have rigorously examined how OpenEvidence affects patient outcomes, largely due to how recently the tool has exploded in popularity. While OpenEvidence highlights that it scored 100% on the official United States Medical Licensing Examination, an academic study released in December found that OpenEvidence accurately answered more complex medical questions less than 45% of the time. That study has not yet been peer reviewed.
Several researchers, including Galvin, are aiming to fill this evidence gap and better understand how OpenEvidence is changing clinical care.
“We want to make sure that we have explored and understood how these tools are performing for our population and that they are making decisions in a fair, valid, equitable and safe manner,” Galvin said.
Galvin’s research will examine how early-career doctors use OpenEvidence and compare its answers to those from several general-purpose chatbots, like OpenAI’s ChatGPT or Google’s Gemini, using the same prompts. “We’re excited to complete our data collection, write up our work and share it with our community,” Galvin said.
Many doctors are confident that their clinical training — much of it based on years of learning to decipher which symptoms or medical details are most relevant while ignoring less-important or misleading signals — will allow them to parse OpenEvidence’s answers for the most valuable information.
“I know the right questions to ask OpenEvidence, and then I have to sort of pair whatever response that I get from OpenEvidence with my clinical experience and intuition,” said Dr. Cornelius James, an internal medicine specialist and professor at the University of Michigan. “I don’t feel concerned about patient safety, because for me, I feel enough of that need to check and double check, trust-but-verify mentality.”
While more experienced doctors might have the ability to fall back on years of clinical expertise, some worry that reliance on such a medical tool might lead to dependence and misplaced confidence among medical students and junior doctors. NBC News spoke with several medical students, all of whom used OpenEvidence to help them study and prepare to discuss cases with their teachers.
One midcareer doctor in Missouri, who requested anonymity given the limited number of providers in their medical field in the country, said they were already seeing the detrimental effects of OpenEvidence on students’ ability to sort signal from noise.
“My worry is that when we introduce a new tool, any kind of tool that is doing part of your skills that you had trained up for a while beforehand, you start losing those skills pretty quickly,” the doctor said. “This is something that we are now introducing to students from the get-go.”
“That’s pretty scary, because there’s not a way to teach people — or there is a way, but we haven’t introduced it into many curriculums yet — about how to use these tools, how to be safe about using these tools. And part of that is these tools are just expanding so quickly in scope and ability,” the doctor said.
“I can’t think of a single time where we’ve had something like OpenEvidence this rapidly changing in terms of availability, in terms of the effect that it has had on doctors’ scope of practice.”
Despite these fears, many clinicians point out that doctors have always had to adopt new tools. Putting more and better medical information in their hands could significantly improve their ability to care for patients, they said.
Dr. Girish Nadkarni, a nephrologist and the head of AI at Mount Sinai Health System in New York City, agreed, arguing that doctors like using tools that help them better care for their patients. If doctors are using AI tools, then it is better to work with providers in the open to discuss ethical use than to ignore the tools’ popularity.
“There’s this whole growing area of shadow AI, in which the health systems or institutions will focus on the tip of the iceberg. But there’s a whole part of the system below the ice, where physicians and clinicians and other people just use tools on their personal computers,” Nadkarni said.
In March, Mount Sinai, which employs 47,000 people, announced a new enterprise partnership with OpenEvidence to directly link to the service from the hospital system’s main electronic health record portal for use by doctors, nurses and pharmacists. “I think it’s time to bring tools like OpenEvidence to the surface,” Nadkarni said.