People are sneaking AI into work, risks and all. But with smart guardrails, companies can turn shadow AI into a winning edge.

The risks of sharing legal information, financial data and sensitive code with shadow AI (that is, unauthorized generative AI tools) cannot be overstated.
A single data leak can lead to compliance violations, loss of invaluable IP and a decrease in public trust. Nevertheless, according to a recent study, working professionals in the U.S. and Canada aren’t overly concerned about their usage of shadow AI.
In fact, the vast majority (91%) of surveyed employees said they believe that shadow AI poses no risk, very little risk or some risk that’s outweighed by the reward. Perhaps even more disturbing, over a third of employees admitted to sharing sensitive information with these unauthorized AI tools.
Of the employees sharing data with shadow AI, 32% shared non-public product information; another 33% shared confidential client information; and 37% shared internal documents related to strategy or financial data. Were this sensitive data to leave the organization, the damage could be devastating and long-lasting.
Despite the risks, shadow AI is increasingly prevalent
According to the study, which featured 350 IT decision-makers (ITDMs) and 350 working professionals across enterprises in the U.S. and Canada, shadow AI is definitely on the rise. A whopping 93% of employees admitted to inputting data into generative AI tools without corporate approval. What’s more, 60% of employees said they are using unapproved AI tools more than they were a year ago.
Across the board, ITDMs and working professionals are seeing an increase in shadow AI. In North America, 70% of ITDMs reported seeing unauthorized AI use in their organizations, and 82% of U.S.-based employees said they knew coworkers who used AI tools without authorization.
The motivations for using unsanctioned AI tools vary. Summarizing meeting notes and calls (56%) is a popular use case, as are brainstorming ideas (55%), analyzing data and reports (47%), drafting or editing emails and documents (47%) and generating client-facing content (34%).
Not only does this study highlight a rise in shadow AI usage and the related security concerns, but it also points out a general lack of adequate governance.
Governance concerns and leadership blind spots
Unlike working employees (91% of whom view the risk of shadow AI as minimal or outweighed by the reward), nearly all ITDMs (97%) acknowledge that the use of shadow AI poses significant risks to their enterprises. Most ITDMs (63%) say potential data leakage is the primary risk of shadow AI; however, risks related to hallucinations, discrimination and a lack of explainability are prevalent as well.
Although ITDMs have approved some AI solutions for employee use — genAI text tools (73%), AI writing tools (60%) and code assistants (59%) — the ITDMs are playing both catch-up and whack-a-mole when it comes to shadow AI governance.
Most ITDMs (85%) report that employees are adopting AI faster than their IT teams can assess the tools, and more than half (53%) believe that their employees’ use of personal devices for work-related AI tasks is creating blind spots in their organization’s security posture. Given this precarious situation, enterprises should have clear, enforceable AI governance policies in place. Yet many apparently do not: only 54% of ITDMs say their policies on unauthorized AI use in the organization are effective.
Transforming the IT department from a gatekeeper into an enabler
Although this study emphasizes the prevalence of shadow AI and its corresponding security risks, there is an underlying opportunity here. Implemented correctly, generative AI tools can provide a strategic edge. By building transparent, collaborative and secure AI ecosystems, IT teams can help their employees work faster and more efficiently while also securing sensitive data and minimizing risks related to data leaks and compliance violations.
The first step is to assess how employees are using generative AI tools. Once AI usage patterns are established, create an official list of sanctioned tools. During the vendor due diligence process, consider utilizing API access to cloud-based AI tools that offer robust security, data control and compliance measures.
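One low-friction way to start that assessment is to mine existing web-proxy or firewall logs for traffic to known generative AI services. The short Python sketch below assumes a CSV-formatted proxy log with a "host" column and a hand-maintained list of AI domains; both are illustrative placeholders, not a prescription for any particular product.

```python
# Minimal sketch: tally outbound requests to known generative AI domains
# from a web-proxy log. The log format and the domain list are assumptions;
# adapt them to whatever your proxy or firewall actually exports.
import csv
from collections import Counter

# Hypothetical watch list of generative AI domains.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def summarize_ai_usage(log_path: str) -> Counter:
    """Count requests per AI domain in a CSV proxy log with a 'host' column."""
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if host in AI_DOMAINS:
                usage[host] += 1
    return usage

if __name__ == "__main__":
    for domain, hits in summarize_ai_usage("proxy_log.csv").most_common():
        print(f"{domain}: {hits} requests")
```

A tally like this will not catch usage on personal devices, but it gives IT a first approximation of which tools employees already rely on and how heavily.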
Another approach, which might be prohibitively expensive for smaller organizations, is to build a proprietary AI stack in-house. Some organizations may opt to build customized, in-house systems on top of open-weight models from the likes of Meta (Llama) or DeepSeek, then enhance those models via retrieval-augmented generation (RAG). By going this route, an organization can ensure that all sensitive corporate data remains inside its own network.
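As a rough illustration of that pattern, the sketch below embeds a handful of internal documents locally, retrieves the best match for a question by cosine similarity and passes it as context to an in-network model server. The embedding model name and the local endpoint (http://localhost:8000/generate) are assumptions; any self-hosted embedding model and open-weight model deployment could stand in.

```python
# Minimal RAG sketch: documents, embeddings and generation all stay in-network.
# The embedding model and the inference endpoint below are assumptions.
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

DOCS = [
    "Q3 revenue grew 12% year over year, driven by the enterprise segment.",
    "The incident-response runbook requires notifying legal within 24 hours.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # runs locally, no external calls
doc_vectors = embedder.encode(DOCS, normalize_embeddings=True)

def answer(question: str) -> str:
    # Retrieve the most relevant internal document by cosine similarity.
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    best_doc = DOCS[int(np.argmax(doc_vectors @ q_vec))]

    # Send the question plus retrieved context to a self-hosted model server
    # (hypothetical endpoint; e.g., a local Llama or DeepSeek deployment).
    prompt = f"Context: {best_doc}\n\nQuestion: {question}\nAnswer:"
    resp = requests.post("http://localhost:8000/generate", json={"prompt": prompt})
    return resp.json()["text"]

print(answer("How quickly must legal be notified after an incident?"))
```

In production, the in-memory list would give way to a proper vector database, but the key property holds either way: prompts, documents and responses never leave the corporate network.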
After assessing employees’ AI usage, conducting vendor due diligence and getting a model up and running, organizations must put guardrails in place. This entails auditing model outputs, creating role-based access controls and flagging any unauthorized access in real time.
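One lightweight way to express those guardrails in code is a wrapper that checks a caller’s role before forwarding a prompt and writes every request, and every denial, to an audit log. The role names, permitted actions and generate_fn callable below are hypothetical placeholders; a real deployment would plug in its identity provider and model client.

```python
# Sketch of simple guardrails: role-based access checks plus an audit trail.
# Roles, permitted actions and generate_fn are illustrative placeholders.
import logging
from typing import Callable

logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

# Hypothetical mapping of roles to the model actions they may perform.
ROLE_PERMISSIONS = {
    "analyst": {"summarize", "draft"},
    "engineer": {"summarize", "draft", "code"},
}

def guarded_generate(user: str, role: str, action: str, prompt: str,
                     generate_fn: Callable[[str], str]) -> str:
    """Run generate_fn only if the role permits the action; audit everything."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        # Flag the unauthorized attempt immediately so it can be reviewed.
        logging.warning("DENIED user=%s role=%s action=%s", user, role, action)
        raise PermissionError(f"{role} is not allowed to perform '{action}'")

    output = generate_fn(prompt)
    logging.info("OK user=%s role=%s action=%s prompt_len=%d output_len=%d",
                 user, role, action, len(prompt), len(output))
    return output

# Usage with a stand-in model call:
print(guarded_generate("jdoe", "analyst", "summarize",
                       "Summarize these meeting notes...", lambda p: "summary..."))
```

Logging only metadata (lengths rather than full prompts) keeps the audit trail itself from becoming a store of sensitive data; richer output auditing can be layered on where policy allows.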
Rectify any disconnect between IT personnel and senior leadership
In order to establish organization-wide AI alignment, everyone should be on the same page. Unfortunately, this is rarely the case. According to the recent study, 90% of employees trust shadow AI tools to protect their data, and 50% believe there’s little to no risk in using these unapproved tools.
To be sure, AI training programs are needed to educate employees about the risks inherent in using unsanctioned AI tools. Also consider creating AI sandboxes where employees can safely test new AI tools, and reward personnel who follow generative AI best practices.
Given that only 31% of ITDMs believe that senior leaders from other departments fully understand the risks posed by shadow AI, it is clear that senior leadership needs education as well. This disconnect between ITDMs and other executives creates an untenable governance vacuum.
The main takeaway is that shadow AI poses a bevy of threats, not least of which is the potential for breaches that expose sensitive data. As the ManageEngine study showed, roughly a third of employees admitted to entering confidential client data into AI tools without confirming company approval, and another 37% admitted to entering private, internal company data into such tools.
The danger is palpable, but so is the opportunity. If IT leaders can shift from playing defense to building secure AI ecosystems that employees feel empowered to use, shadow AI can become a genuine strategic advantage.
This article is published as part of the Foundry Expert Contributor Network.