Why awareness — not policy — is your first line of defense against AI risk.

When assessing AI risk, organizations often focus on the most complex threats: algorithmic bias, intellectual property concerns or emerging regulation. But one of the fastest-growing and most overlooked risks is far simpler — employees may not realize they’re using AI at all.
AI is no longer confined to enterprise innovation labs or data science teams. It’s embedded in everyday workflows through tools like Microsoft Copilot, Google Gemini, email summarizers, CRM chatbots and recruiting platforms. Many employees are using AI daily, often without realizing it.
Nearly all Americans use products that involve AI features, yet almost two-thirds (64%) don’t realize it. Meanwhile, only 24% of workers who received job training in 2024 say it was related to AI use. Either way, employees are using AI, intentionally or not, and the fear, confusion and unclear policies around its use can create unintended and unexpected problems.
The result: growing exposure, under-the-table use and policies that may look good on paper but are functionally invisible in practice.
Awareness is the missing link between policy and practice
Strong AI policies are essential: they define expectations, articulate principles and set the guardrails for responsible use. But policy alone isn’t enough. Even the most well-crafted frameworks risk falling short without a corresponding investment in awareness and enablement.
Employees can’t follow what they don’t fully understand. Many don’t recognize when AI capabilities are embedded in the tools they use, or understand what responsibilities come with those interactions. Closing that gap requires more than publishing rules; it demands ongoing education and contextual support, especially in decentralized, fast-moving environments.
5 key considerations for building AI literacy and reducing risk
Without a clear understanding of AI tools and policies, there’s a risk of unintentional misuse, shadow AI practices and inconsistent adherence to governance frameworks. Here are five key considerations to help close those knowledge gaps and build an enterprise-wide culture of AI literacy and risk awareness.
1. Start with awareness, not just rules
With generative and predictive tools embedded in everyday platforms, most users engage with AI passively and often unknowingly. That’s why the first step in any enablement effort must be awareness.
Employees should be introduced to AI in an accessible way that is relevant to their workflow and grounded in real examples. It’s not enough to say, “Don’t upload sensitive information to AI tools.” People need to understand what qualifies as an AI tool, when they’re using one and why certain behaviors create risk.
Start with easy-to-grasp definitions. Use language that resonates with non-technical teams. Frame the message not as a restriction but as a shared responsibility — one that protects the organization and empowers smarter decisions at the front lines.
2. Involve employees in shaping the policies
When people feel ownership over the tools and rules that shape their work, they’re far more likely to understand, remember and apply them. For example, asking a group of employees to read the draft AI policy and provide feedback on unclear or overly technical language can spark valuable cross-functional dialogue, reveal gaps in understanding and directly inform revisions to make the final policy more approachable. More importantly, it sends a clear message: This isn’t a top-down document written in a legal or technical vacuum — it’s meant to work in practice.
This kind of participatory approach transforms policy from a static document into a shared standard. It builds credibility and promotes adoption across departments, particularly in complex organizations.
3. Use the “drip method” to reinforce learning
Research on the forgetting curve shows that learners forget more than 50% of new information within an hour of learning it — and even more within a week without reinforcement.
That’s why one-off policy briefings and static training modules often fail to create lasting behavioral change. Instead, organizations should adopt the drip method: delivering small, focused messages at regular intervals through the platforms employees already use, such as email, Slack, Microsoft Teams or internal dashboards.
This microlearning approach boosts retention and builds long-term familiarity. And when tailored to real-time tools, use cases and evolving regulatory risks, it becomes not just reinforcement but strategic enablement.
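To make the drip method concrete, here is a minimal sketch of one possible delivery mechanism: a small script that posts a short AI-policy tip to a Slack channel on a recurring schedule. The webhook URL, the tip list and the weekly cadence are illustrative assumptions, not a prescribed implementation; the same pattern applies to email, Teams or an intranet dashboard.

```python
# Minimal sketch: post one short AI-policy tip to Slack via an incoming webhook.
# The webhook URL and the tip list below are hypothetical placeholders.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # hypothetical

TIPS = [
    "Reminder: the drafting assistant in our CRM is an AI tool. Don't paste customer data into it.",
    "Quick check: does that email summarizer send content to a third-party model? If unsure, ask.",
    "Two-minute read: what 'sensitive information' means in our AI policy (see the intranet).",
]

def send_tip(week_number: int) -> None:
    """Send one tip per scheduled run, cycling through the list."""
    tip = TIPS[week_number % len(TIPS)]
    payload = json.dumps({"text": tip}).encode("utf-8")
    request = urllib.request.Request(
        WEBHOOK_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request)  # the weekly cadence is handled externally, e.g. by cron

if __name__ == "__main__":
    send_tip(week_number=1)
```

The point is not the tooling but the cadence: each message is small enough to read in under a minute and arrives where employees already work.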
4. Tailor training by role and risk
Not all AI use is created equal. A developer using generative models to write code faces different exposures than a marketer using an AI-enabled writing tool. Likewise, executives making decisions based on predictive analytics carry a different set of responsibilities than customer service reps interacting with chatbot platforms.
Risk exposure should drive training depth. Higher-risk roles may need more frequent refreshers or scenario-based simulations, while lower-risk teams may benefit from just-in-time reminders or onboarding briefings.
Create modular learning paths tailored by job function, geography and toolset. Consider the regulatory implications of location. For example, employees in the EU may need to follow different transparency protocols under the EU AI Act than their US counterparts, even when using the same tool. Training must reflect those distinctions.
5. Measure both completion and comprehension
Training metrics often default to completion rates, but a finished module doesn’t guarantee understanding. One of the biggest red flags in any enablement program is silence. When employees aren’t asking questions, offering feedback or flagging uncertainty, it may signal disengagement rather than understanding. Track both quantitative and qualitative indicators, such as the following.
Quantitative metrics can include:
- Percentage of employees who complete required training
- Time spent on modules
- Help desk tickets related to AI tools or policy questions
Qualitative insights may come from:
- Feedback surveys following training
- Focus groups or pilot testing for new tools
- Informal conversations with team leads about what’s working and what’s not
These signals help organizations spot knowledge gaps early and adjust communications accordingly. They also support a more adaptive approach to governance — one where education and oversight evolve in tandem with the increasing use of AI across the business.
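As a rough illustration of how completion and comprehension can be tracked side by side, the sketch below combines module completion with a short post-training quiz and a simple "silence" signal. The record format, the 80% quiz threshold and the silence heuristic are illustrative assumptions rather than a recommended standard.

```python
# Minimal sketch: summarize completion vs. comprehension signals from training records.
# The fields, thresholds and heuristics here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    employee_id: str
    completed_module: bool
    quiz_score: float      # 0.0 to 1.0 on a short post-training quiz
    questions_asked: int   # questions or feedback items submitted after training

def summarize(records: list[TrainingRecord]) -> dict[str, float]:
    total = len(records)
    completed = sum(r.completed_module for r in records)
    comprehended = sum(r.completed_module and r.quiz_score >= 0.8 for r in records)
    silent = sum(r.completed_module and r.questions_asked == 0 for r in records)
    return {
        "completion_rate": completed / total,
        "comprehension_rate": comprehended / total,
        "silent_completers": silent / total,  # a high value may signal disengagement
    }

if __name__ == "__main__":
    sample = [
        TrainingRecord("e1", True, 0.90, 2),
        TrainingRecord("e2", True, 0.55, 0),
        TrainingRecord("e3", False, 0.0, 0),
    ]
    print(summarize(sample))
```

A gap between the completion rate and the comprehension rate, or a high share of silent completers, is exactly the kind of early signal the qualitative follow-up described above should investigate.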
Turning awareness into operational strength
As AI continues to integrate into everyday workflows, organizations must start investing in the awareness, understanding and behavior change needed to support AI governance. That means treating AI literacy as an enterprise competency, not just a compliance checkbox.
The risks of inaction are unintentional misuse, inconsistent adoption, growing regulatory exposure and erosion of trust in these new technologies. But the opportunity is just as significant. By enabling employees to recognize, question and engage responsibly with AI, organizations empower their workforce to innovate with clarity and confidence. That’s the real goal of AI enablement: not just protecting the business from what could go wrong but preparing it to move forward successfully in an AI-enabled world.