Each night, when most medical students at NYU Grossman School of Medicine and NYU Grossman Long Island School of Medicine are asleep, AI is busy at work on their behalf.
An AI application takes anonymized diagnostic codes from six inpatient hospitals and more than 300 outpatient locations across the NYU Langone Health system, matches them with relevant research and diagnostic information, and emails the findings to students in time for their morning coffee.
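The article does not describe the system's internals, but conceptually it resembles a nightly batch job: de-identified codes go in, matched material comes out per student. The sketch below is a hypothetical illustration only; the function names, data structures and example ICD-10 codes are assumptions, not NYU Langone's actual implementation.

```python
# Hypothetical sketch of a nightly matching job; not NYU Langone's code.
from dataclasses import dataclass

@dataclass
class Case:
    code: str           # anonymized diagnostic code, e.g. an ICD-10 code
    student_email: str  # student following the case

# Stand-in knowledge index keyed by diagnostic code.
RESEARCH_INDEX = {
    "E11.9": ["Review: outpatient management of type 2 diabetes"],
    "I10": ["Guideline update: first-line therapy for hypertension"],
}

def match_research(case: Case) -> list[str]:
    """Look up material relevant to a case's diagnostic code."""
    return RESEARCH_INDEX.get(case.code, [])

def nightly_run(cases: list[Case]) -> dict[str, list[str]]:
    """Build a per-student digest that could be emailed each morning."""
    digests: dict[str, list[str]] = {}
    for case in cases:
        digests.setdefault(case.student_email, []).extend(match_research(case))
    return digests

if __name__ == "__main__":
    sample = [Case("E11.9", "student@example.edu"), Case("I10", "student@example.edu")]
    print(nightly_run(sample))
```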
The platform is one example of agentic AI, or AI systems that can make decisions and act autonomously.
"It's really not making a change to the AI itself but making a change to the surrounding components to equip that AI to make more relevant decisions," said David Brauchler, technical director at NCC Group, a cyber security consultancy and managed services provider in the U.K.
But agentic AI can only work if it's secure, well-maintained and applied to the right use cases. Here are five questions CIOs should ask before deploying the technology.
1. What problem will agentic AI solve?
When companies start deploying AI, they should put together a strategic plan to make sure that the right technology is being applied to the right use case, said Nader Mherabi, EVP, vice dean, chief digital and information officer at NYU Langone Health. "You can't have AI outside of the workflow because people won't use it."
NYU Langone has a range of AI-related projects using different models. AI agents made sense for matching cases with research because it was a relatively straightforward use case, and one that could provide highly beneficial results, according to Mherabi.
"It's very powerful in how it accesses the data and compiles the data and comes up with medical knowledge, and then serves a personal set of learning to students," Mherabi said.
The agent does not touch highly sensitive — and highly regulated — patient data, making it a low-risk use of AI. The information is disconnected from patient encounters, and the application runs within NYU Langone’s secure network.
"It can operate pretty much in an autonomous format," Mherabi said.
2. Do we have the data to make it work?
An agent is only going to be effective if it has the right data — and if that data is prepared properly, said Tom Coshow, senior director analyst at Gartner.
The data also has to be aligned with an exact use case. Otherwise, the agent will be pulling data from places it shouldn't, which can lead to wrong answers or, in a worst-case scenario, give workers access to data they shouldn't be able to see.
Data needs to be maintained too, which requires the right talent, according to Coshow. "There is an emerging field of data experts that are going to get really good at maintaining data in a way that is friendly to AI agents," Coshow said.
3. Will data be secure?
Beyond ensuring workers don’t gain access to information they shouldn't have, organizations should factor in the security risks that agentic AI could carry.
"Everything that you add to your application environments or networks are additional attack networks that you have to govern," said NCC Group’s Brauchler.
Companies should conduct security testing before adding AI to systems and platforms that weren't designed with AI or AI-related risk in mind. NCC Group recently tested such a platform and found that testers could manipulate the system by adding hidden text.
"Because the platform wasn't designed for AI, it created a new hole in their system that wasn't there before," Brauchler said.
Secure adoption requires AI-trained engineers who can prepare for and manage those risks. Otherwise, "you're effectively going to be adding an attack surface without having engineers who have the experience to address AI-specific risks," he added.
4. Where does agentic AI fit into the company's tech strategy?
Not all agentic AI is created equal, so CIOs need to ask what platforms are appropriate in the context of the overall technology strategy.
"Am I only going to build in my cloud provider or am I going to startups that have special industry skills?" said Coshow.
CIOs should also ask how many agentic AI systems they are comfortable with, and whether they can monitor all of those agents if they are operating on different platforms.
NYU Langone approaches AI-related projects one piece at a time. That could mean building smaller AI agents that can, like Lego bricks, be connected into a larger whole. That way, if one isn't working, it can be pulled out and fixed without dismantling the whole operation, said Mherabi.
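In rough terms, that modular approach might look like the hypothetical Python sketch below, where each step is a small, swappable function behind a common interface. The names and pipeline shape are illustrative assumptions, not NYU Langone's design.

```python
# A minimal sketch of the "Lego brick" idea: small, single-purpose agents composed
# behind a shared interface so any one of them can be replaced independently.
from typing import Callable

Agent = Callable[[dict], dict]  # each agent takes and returns a shared context dict

def fetch_codes(ctx: dict) -> dict:
    ctx["codes"] = ["E11.9", "I10"]  # placeholder for a de-identified feed
    return ctx

def match_literature(ctx: dict) -> dict:
    ctx["matches"] = {c: f"articles for {c}" for c in ctx["codes"]}
    return ctx

def draft_digest(ctx: dict) -> dict:
    ctx["digest"] = "\n".join(ctx["matches"].values())
    return ctx

PIPELINE: list[Agent] = [fetch_codes, match_literature, draft_digest]

def run(pipeline: list[Agent]) -> dict:
    ctx: dict = {}
    for agent in pipeline:  # swap or remove one step without touching the rest
        ctx = agent(ctx)
    return ctx

print(run(PIPELINE)["digest"])
```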
5. How will agentic AI be monitored?
Agentic AI is not a set-it-and-forget-it type of application. Instead, it needs to be continuously monitored to make sure that the data is accurate, is being used appropriately and benefits end users.
An application using agentic AI "requires constant care," said Mherabi.
Close monitoring has kept the health system's agentic AI application delivering tailored information to medical students since it was deployed last year, and it continues to meet its use case goal.
"I think it's nice for kids to get up in the morning, log into the portal and see relevant materials," said Mherabi. "Who wouldn't like that?"