AI in healthcare is the 21st century’s gold rush moment, where anyone with a functioning shovel (i.e., any kind of chatbot, GenAI tool, pattern-recognizing algorithm or, at the bare minimum, automated software) is staking out territory to dig. In fact, AI technology in digital health tools has become so pervasive that Rock Health announced in its first-quarter investment report, published Monday, that it is retiring its “AI deal” tracker as a distinct category. That’s because the “distinction is blurring” between digital health companies and AI-enabled healthcare startups.
The hope, or gamble, of these startups and their investors is that AI will make the fragmented health industry a little less broken, and that, in turn, they will rake in millions. Many of these AI startups will fail, without a doubt, and consolidation is bound to occur. Yet the founder and CEO of Health Universe, a San Francisco-based startup that recently raised a humble $6 million in seed funding led by the venerable venture capital firm Kleiner Perkins, is unflappable. Dan Caron doesn’t seem to care that many companies have staked their ground pretty firmly, becoming leaders in their area of expertise, be it ambient AI, clinical documentation improvement or prior authorization and revenue cycle management. Speaking in slow, measured tones, Caron emanates an air of thoughtfulness. He is building the infrastructure layer of AI in healthcare: the Palantir of healthcare.
“So there’s a lot of folks that are looking to bring AI into their organizations, and in healthcare, that means that you need a secure, compliant, regulated sandbox in order to make these tools safe and applicable for a healthcare environment,” he said in an interview last week. “And so Health Universe has really built that sandbox. And we, as a company, allow organizations to build and deploy agents inside of the sandbox.”
Health Universe’s customers are mainly academic medical centers as well as individual researchers, and to a smaller extent clinical laboratories, payers and small providers.
What follows is a lightly edited Q&A in which he addresses everything from competition to how AI is making us less skilled.
MedCity News: You talked about a sandbox. There are so many players in the AI sandbox. There are obviously companies that are building AI agents for different industries that want to take a crack at healthcare. There are companies that want to solve one particular problem, for example, clinician burnout or patient recruitment for clinical trials. You are competing with all of these companies, and there are some pretty big names in the field. How do you take on companies like Abridge or even bigger competitors of theirs like Epic?
Caron: You see a lot of point solutions that have risen to stardom and fame with a very narrow point solution. And a lot of those high-flying solutions are now searching for another way to create value (maybe a subtle dig at ambient AI companies slowly moving into prior authorization and revenue cycle management to prove ROI and not solely improved provider satisfaction with a new technology). There was a lot of hype. There still is a lot of hype. And at Health Universe, we’ve taken a different approach where we’ve spent the last few years building the capability for an organization to launch many different AI agents and many different AI applications.
And what that means is that if you think of the long tail of all of the use cases in healthcare, there’s more than just prior auth. There’s more than just specialty prescribing. There are thousands and thousands of workflows in healthcare that are humans marching to a fax machine, or doing data entry into some old, rigid system and then sending a carrier pigeon to another office down the street, and then using Morse code to connect to some other organization.
And those broken workflows are not solvable by narrow point solutions. Every organization faces fragmented systems, legacy systems, and these are the real bottlenecks in healthcare. It’s not that they don’t have a fancy new AI point solution. That fancy new AI point solution isn’t going to fix the entire landscape of health IT, which is often antiquated and dated. And so Health Universe takes the position that we can be the single point of integration where once integrated, agents can then be developed by the system, developed by other vendors, or developed by Health Universe to solve whatever the top priorities are of the organization.
And so we’re not here saying, “You need to adopt this single AI solution because it’s the greatest.” We say, “You know what? We know the landscape is going to change. The foundation models are going to change and upgrade. We know that there’s going to be great models coming from organizations like Harvard, Stanford, UCSF … and across the world even. And how do you integrate those models and applications in a way that is standardized?” And so that’s our hypothesis.
And it’s really proven to be, I think, pretty accurate because a lot of our customers come back and ask for additional agents once they see the flexibility of Health Universe and they say, “Oh, well, we don’t need to integrate this other point solution. We can just spin up a new agent using Health Universe.”
MedCity News: AI, still in many cases, is a little bit of a black box. We don’t know exactly how the machine works in the background. Now, OpenEvidence is able to cite its sources when it answers your question, and that’s why it’s very popular among doctors. When you have an AI agent trying to match a patient to a clinical trial, how do we know that it is looking at the correct medical record?
Caron: On Health Universe, we provide sources back to the original medical records so that a clinician can go and verify. In the United States, at least, we’re still very much a human-in-the-loop society where we want to see the sources, we want to see the medical records, and we want to understand to the best of our ability how an agent came to a conclusion. Even though neural networks are not inspectable, you can take the output and you can classify it in terms of its risk and risk stratification is important. And you can use additional tools such as LLM-as-a-Judge or other automated reviewing systems to take a look and see, “Is this high risk? Does this potentially contain any medical toxicity? Are there reasons we should flag this or otherwise alert the clinician?”
We’ve done studies with gold-standard human reviewers, who reviewed the agentic output and also created their own summaries. And we found in many cases that AI summaries contain important details that the humans missed. That surprises some folks who have been doing this work for a very long time.
We have to be careful, we have to be responsible, but we also have to acknowledge that humans have deficiencies and biases too. And we have to figure out how to navigate this world in a way that empowers clinicians and empowers humans while taking advantage of the things that AI does well.
MedCity News: Ultimately all the technology that we build will have our own biases. So how do we correct for that? And how is it possible that given these biases, the AI was able to detect things that the humans missed?
Caron: It’s a great question. A lot of that has to do with the source data. If you are aggregating data that has lots of bias, then you’re going to have models with lots of inherent bias in them. Model developers and model builders, if they’re practicing data science well, should be aware of that drift. And the more we aggregate sensor inputs and laboratory inputs and automate the collection of that data, the more we can start to move away from human bias, because the more purely quantitative we are, the less likely it is, I think, that human bias influences transformer models in a way that is detrimental.
MedCity News: Can you give me concrete examples of some of the agents that you’ve built and what they’re able to do?
Caron: We just announced a partnership with the Duke Clinical Research Institute, and in collaboration with Duke, we built agents that stand up a new clinical trial. A PI (principal investigator) can generate a short synopsis on their own, and then an agent takes that synopsis and generates a full-blown clinical trial protocol. That is a sophisticated agent, not a simple one-shot LLM call. It goes and looks at other existing trials, looks at schemas depending on the nature of the trial, builds a fact sheet, and then builds that trial in a way that is internally consistent and avoids drift.
From that, we’re able to build all of the downstream documentation, the eCRF (electronic Case Report Form), the ePRO (electronic Patient Reported Outcome). Ultimately, using Health Universe, we can submit that protocol to an IRB on Health Universe that has their own agents that mark up the document from a regulatory perspective. Then the human-in-the-loop can say, “Okay, yeah, you’re right, this is a good point. We need to make sure that this is addressed and this is addressed.”
They can fire that review back over to the PI. And Duke ran this project on Health Universe and showed that they could basically stand up a new clinical trial in seven and a half days, which normally would take six to nine months.
MedCity News: Let’s talk a bit about the business side. You work with academic medical centers, but who else are your customers?
Caron: Academic medical centers make up a large majority of our customer base. Individual researchers use the infrastructure as well to deploy and test their AI models and make them shareable. In terms of customers, we seek to help anyone who needs a secure, compliant sandbox for deploying AI tools in healthcare. So sometimes clinical laboratories come to us, payers, small providers.
MedCity News: This is a layman’s question but Nvidia is also trying to build the infrastructure layer for AI in healthcare. How would you distinguish Health Universe from what they’re doing?
Caron: I think Nvidia has done a great job at federated data — that’s a very specific way to build models in a cross-institutional way. But if you need to run those models over patient data easily, Health Universe is a better option because we are more of the application layer, where institutions can bring their own data and their own EHRs and plug into Health Universe workspaces.
We are very much a user experience layer with authorization, authentication, commercialization, discovery, and even orchestration. So say you have a patient record and you’re wondering whether there’s an AI tool you might want to run on it. For example, we have a posture and scoliosis tool, a computer vision model that can detect the hip angle and spine angle of a patient and produce a risk score. The aggregation of these different applications is something a bit different from, say, the federated data infrastructure that Nvidia provides.
MedCity News: You are developing an AI agent matching marketplace. In other words, if two different organizations both have AI agents, those agents can talk to each other. What kinds of information can they exchange?
Caron: We’re getting inundated with faxes in healthcare. And we see a world where agents send data back and forth, handle all of the pre-work, and then bubble up decisions for review by humans. That’s really the future.
For instance, you could run an agent in your personal workspace that summarizes your health and enriches what would normally be just a couple of notes or, “Hey, I kind of feel this way.” If an AI is looking at your Whoop data and looking at your lab data and is already doing a lot of the pre-work, and then you are looking at that and saying, “You know what? I think that makes a lot of sense, and I want to put this information in front of my clinician,” and then being able to send that summary off to that clinician.
And they receive that, and maybe an agent works it up and says, “You know what? These might be appropriate interventions here, and we’re going to stack rank them based upon the patient’s medical records that we have access to.” So by the time the data gets in front of the clinician, there’s already an intervention that’s been decided, there’s already a summary of the patient, there are already some flags saying we should probably check on X, Y, and Z. It’s about allowing humans to coexist with agents, with agents taking care of all of the upfront work. I think that’s the new paradigm.
MedCity News: Let me ask a philosophical question. What you just described would require the physician to have enough knowledge to say, “OK, I will choose option 3 in terms of intervention instead of option 2.” There’s already evidence of de-skilling happening as we rely on AI more and more. What will happen to the future of the medical profession when the years of hands-on learning and knowledge based on lived experience disappear because AI is doing everything?
Caron: It’s a very important question. I love reading books, and from a very young age, I would go to Barnes & Noble and I would sit and stare at all of the computer science books and I would pick one off the shelf and be so fascinated at all of the knowledge that humans have compiled. And I always wanted to be able to understand it all. And so I have a love of knowledge and learning. And yes, I have seen some of the research around humans losing their edge because AI is doing the thinking for us.
We don’t want to collapse the intelligence of human beings. And I think people who are designing AI systems have to think about the implications. And so one potential way would be if you have an interface that is making a recommendation, maybe there is a secondary experience where there’s some active learning that actually happens.
Maybe there’s sort of a multiple-choice question where you are helping to train the human. Maybe there’s a secondary education component to make sure that that clinician is up to speed. And further, AI systems can look at a clinician’s performance and say, “Well, you know what? We see that you’re prescribing this medication for type 2 diabetes very frequently, but there are new medications, and you’re kind of missing the boat here.”
MedCity News: What you said about the AI agents spinning up a summary of your own patient record and then communicating with the agent in the doctor’s office — you are not planning a consumer application, are you? Should OpenAI and Anthropic watch out?
Caron: We are a platform for the experimentation of agents in healthcare. And if patients decide that there’s value in using a personal workspace on Health Universe and having machine learning models that have been built by developers at leading academic medical centers, and if those are open-sourced or within reach, then yeah, we would certainly help folks out.
OpenAI and Anthropic — they’re clearly leaders in the space, but I think if you look at the connective tissue between a patient and a provider, that requires a lot of security, compliance, and regulatory oversight to do that well, to do it safely. And I don’t think that they are going to tackle those interfaces anytime soon.