Enforcement of service level agreements (SLAs) could be supercharged through real-time alerts flagging violations and risky behavior. But generative AI’s potential to revolutionize third-party agreements has its limits.

By outsourcing business functions, CIOs can reap cost and, in some cases, expertise benefits, but they also reallocate risk from in-house talent to employees of third-party firms largely beyond their oversight.
SLAs can provide CIOs with assurances against this reallocation of risk, but traditional SLA metrics and conditions leave gaps and reporting lags, often failing to capture operational risks or threats until it's too late.
Take Clorox's recent lawsuit against Cognizant. The multinational CPG giant, which had outsourced its service desk operations to Cognizant, alleges that Cognizant help desk workers gave out passwords for Clorox systems without following mandatory authentication procedures, resulting in a 2023 breach attributed to Scattered Spider.
Clorox's lawsuit cites transcripts of help desk calls as evidence of Cognizant's negligence, but what if those calls had been captured, transcribed, and analyzed to send real-time alerts to Clorox management? Could the problem behavior have been discovered early enough to thwart the breach?
Here, generative AI could have a significant impact. It can capture information from a wide range of communication channels (and potentially actions as well, via video) and analyze it for deviations from what a vendor has been contracted to deliver. The resulting near-real-time alerts about problematic behavior could spur a rethinking of the SLA as it is currently practiced.
“This is flipping the whole idea of SLA,” said Kevin Hall, CIO for the Westconsin Credit Union, which has 129,000 members throughout Wisconsin and Minnesota. “You can now have quality of service rather than just performance metrics.”
Of course, Hall also cautioned that under such a scenario CIOs would need to be prepared for a fierce fight from third parties when trying to apply SLA penalties.
“My first big worry is enforcement. You might have a lot of work to claim an SLA violation. [Third parties] will look awfully hard for every example where they are exempt,” Hall said. “When it’s time to collect, that process is going to be painful, a very uphill battle.”
As a practical matter, Hall suggested that CIOs would probably only pursue major violations. “You’ll need to have really big ticket items, so you’ll have clear arguments to make,” he said.
Zachary Lewis, CIO of the 160-year-old University of Health Sciences and Pharmacy in St. Louis, also sees potential from this shift in SLA enforcement.
“With this approach, we could get a really good handle on insider threats. The system could trigger on likely insider threats immediately,” Lewis said. “Or if they laugh about their lack of security or talk smack about their clients, we could be alerted right away.”
Cameron Powell, a technology attorney with the law firm Gregor Wynne Arney, also sees the upside of such an approach for countering legal and compliance risks.
"You will be able to scan Zoom meetings, looking for risk issues. It could look for phrases such as 'Let's keep this off email,'" Powell said, giving the example of one of several communication channels where the approach could be applied. "Why not find these issues in real time before a third party sues you or a whistleblower reports you?"
Friction and additional risks
While generative AI, used in this way, could supercharge SLA enforcement, UHSP St. Louis’ Lewis also noted that it would likely meet significant implementation friction.
“Are we going to need another AI to monitor all of the first AI’s data monitoring? If so, then gen AI becomes its own third-party risk,” Lewis said. Will third-party companies avoid this new monitoring by trying to “sandbox themselves from their customers”?
Lewis also questioned how long such an approach would last. “Are we going to have to do this indefinitely?”
Westconsin Credit Union’s Hall sees upside in the call center, where customers sometimes complain and ask that their complaints be properly registered and logged. “If I am at the call center and [the customer] is complaining about me, the odds of my reporting that are low,” Hall said. “This would change that.”
But such monitoring approaches raise privacy and regulatory concerns, especially for healthcare and financial firms. To tackle this, Hall said the first step would be to make sure real-time transcripts were sanitized to remove any protected information, such as health records or payment details.
"It is kind of a [compliance] nightmare as it would be on us to sanitize. How do you trust and verify that [the gen AI system] is properly doing it without constant auditing?" Hall asked. "It might have so many little holes for leaking [protected data] that I would be hard-pressed to go to the board. They would ask, 'How much risk are you taking on and what is the reward?'"
But, Hall said, he could make an argument to the board that this approach had the potential to sharply improve third-party compliance, thereby strengthening the company’s compliance posture.
“If I could convince them with strategy and culture arguments, it could land with the board,” Hall said.
Still, attorney Powell — and others — stressed that generative AI is far from perfect. There’s a difference between flagging a problem and having sufficiently reliable evidence to do something about it.
For example, gen AI “doesn’t understand empathy” or when people need to say something “to calm a customer down or make a nice connection,” Powell said.
Powell also suggested other use cases, such as using video capture to analyze every aspect of a driver's delivery process. Was the package delivered when time-stamped? Did the driver steal anything after delivering the package?
“It could turn today’s SLA from a service level agreement to a surveillance level agreement,” Powell said.
What about privacy?
Mark Rasch, a former federal prosecutor who specializes in technology legal issues, argues that companies need to figure out how to take advantage of this source of ubiquitous data.
"You can now do things that were impossible just a couple of years ago. Before, at most, you could do some spot-checks," said Rasch, who today serves as a professorial lecturer in law at George Washington University Law School and as legal counsel for Unit 221B, a data privacy and security compliance consulting firm. "But what you can do and what is reasonable to do are two very different things."
Rasch and other attorneys interviewed said the law is still learning to keep pace with gen AI, so it's not yet clear how much of this analysis courts will eventually allow.
He pointed to Sorrell v. IMS Health, a 2011 US Supreme Court decision that explored how much privacy physicians can expect and concluded they don't have much.
Flavio Villanustre, CISO for LexisNexis Risk Solutions Group, pointed to another risk: gen AI often hallucinates within transcripts. The prudent move, he said, is for executives to scan the transcript but place much more trust in the captured audio.
Of course, gen AI could just as easily create a bogus audio capture, Villanustre pointed out. Because it's not yet clear that video or audio processed by gen AI can be trusted, CIOs may need direct audio backups that are trustworthy and, ostensibly, beyond gen AI's ability to alter.
“In more complex cases, gen AI can mislead,” Villanustre said.
As for healthcare, attorney Powell said, “Every recording is creating new PHI [protected health information]. Who can access that recording? You may have to create a whole new HIPAA trail for these recordings.”
Similar issues would exist for all other highly regulated enterprises, including financial institutions, energy, transportation, and pharmaceuticals.
If audio or video captures are being analyzed for real-time alerts, could law enforcement or other government agencies demand access? Could a request be placed to listen for someone’s voice and alert authorities if it is detected?
Beyond the SLA
Gary Longsine, CEO at IllumineX, believes the privacy fear may be moot because “clients are recording those calls as well, so that ship has kind of sailed.”
Moreover, gen AI capabilities to track and manage third parties for SLA enforcement could also be applied to an enterprise’s in-house workforce.
Consider the Macy's accountant who successfully hid $154 million in expenses for three years, forcing the retailer to delay and then restate an earnings report. Unlike the audit systems the accountant sidestepped, a gen AI system could perform audits differently "and it would have flagged this right away," said IDC President Crawford Del Prete.
HR might also find it useful, Powell added, to identify employees who are about to resign.
"You can [analyze] internal chat to see who is about to leave. People tend to disengage well before they actually leave. There is a change in language and tone that signals disengagement," he said, adding that gen AI is the first system that could detect that quickly enough to potentially make a change in time.