Tech companies are trying to get healthcare companies to adopt artificial intelligence tools, hoping they can pull in revenue by appealing to the industry’s need to reduce costs and tackle clinician burden.
Google is one such company. The search and cloud computing giant has unveiled a number of health AI products in the past few years, including a large language model called Med-PaLM trained specifically on medical data, generative AI products for healthcare organizations and a platform to help companies create their own AI agents.
Google also offers a tool that allows clinicians to search for and answer questions about information in patient notes and other medical documentation. That product, called Vertex AI Search for Healthcare, has quickly ramped up since launching last March. Earlier this month, Google announced that Vertex AI Search is multimodal, meaning it can now understand images as well as text, allowing the tool to scour charts and scans for relevant information.
Google has inked deals with major EHR vendors and leading health systems to integrate its AI into their workflows. But not everyone is gung-ho about AI, especially as the models become more advanced and the potential for mistakes grows.
Concerns include hallucinations, when AI makes up a response; omissions, when AI leaves important information out; and model drift, when AI becomes less reliable over time. Meanwhile, fledgling oversight and a lack of regulation are also hampering adoption as the industry confronts weighty questions of accuracy, privacy and bias.
Healthcare Dive sat down with Aashima Gupta, the head of healthcare for Google Cloud, to chat about Google’s work in health AI and the future of the technology — including how AI is evolving from a task-based helper to a collaborator for clinicians, uncertainty from the Trump administration and why she’s excited about agentic AI.
Editor’s Note: This interview has been edited for clarity and brevity.
HEALTHCARE DIVE: Vertex AI Search for Healthcare can now understand images. Why was this a necessary update?
AASHIMA GUPTA: Healthcare information is scattered in different forms and types. During an annual diabetes foot exam, for example, when a physician is looking for ulcers, they mark on a diagram of a foot where there are calluses, pre-ulcers and ulcers, using different symbols. Healthcare is full of examinations like this. Now, with multimodal, you're able to see and contextualize that diagram of a foot and extract that information, including any potential ulcers, and put it into the medical record automatically. That saves clinicians time, because they don't need to interpret this themselves.

Healthcare, as an industry, has a lot of paperwork. We talk about burnout. These are the types of innovations we want to add to Search to reduce it. Last year, we said Search was 'semantic,' meaning it knows what people mean when they say 'diabetes' or 'A1C': it knows clinical concepts and how they're related. Now we're applying that to forms in an exam room that include different pictures. So our results are much more accurate and helpful.
What other inputs might Google want to add to the search-and-answer tool — sound?
You're right. We will continue to add different modalities here. Sound and video, for example.
What else is Google working on in healthcare that you’re excited about?
We are very excited about agentic AI. The last few years have been a lot about generative AI, which is task-based — ‘Give me a discharge summary. Give me a nurse handoff. Write me a referral.’
AI agents are a leap forward, because they can think multiple steps ahead. They can plan unique steps for the goal in mind.
Imagine this in healthcare workflows. Let’s say I want to figure out, in my revenue cycle, where there’s variability by payer type, market, for a certain CPT code — that’s a multistep process. And that’s what agentic AI offers. Imagine hundreds of agents helping nurses and physicians do their jobs.
Given the number of companies offering AI agents, what’s Google’s elevator pitch?
We believe there’s a need for centralized coordination and management, and that’s the platform we provide, and we will give agents the tools.
So if a company is building its first-party agent, it has three things: the ability to use [Google's family of large language models] Gemini, our search functionality based on clinical knowledge and an orchestration layer.
We believe that people have choice. Some really want to build it themselves. Some want a partner. How do you organize all of that? That's where we want to be.
Google doesn’t publicly share the number of healthcare organizations using its AI. How would you characterize uptake?
We have relationships with Bassett Health, Highmark Health, Mayo Clinic, HCA — customers using generative AI to streamline prior authorization submissions, close screening gaps, enhance radiology workflows and more.
Healthcare used to be a laggard in technology adoption. But this AI is different — this is actually going into the back office, the revenue cycle, claims, to tackle burnout. You are seeing this move much, much faster. You are seeing generative AI pilots move from experimentation to scaling.
As for agentic AI, I'd say healthcare companies are definitely adopting it. But it's a newer space.